Driver core changes for 6.4-rc1

Here is the large set of driver core changes for 6.4-rc1.
 
 Once again, this was a busy development cycle, with lots of changes
 happening in the driver core in the quest to move "struct bus" and
 "struct class" into read-only memory, a task that is now complete with
 these changes.
 
 This will make the future rust interactions with the driver core more
 "provably correct" as well as providing more obvious lifetime rules for
 all busses and classes in the kernel.
 
 The changes required for this did touch many individual classes and
 busses as many callbacks were changed to take const * parameters
 instead.  All of these changes have been submitted to the various
 subsystem maintainers, giving them plenty of time to review, and most of
 them actually did so.
 
 Other than those changes, included in here are a small set of other
 things:
   - kobject logging improvements
   - cacheinfo improvements and updates
   - obligatory fw_devlink updates and fixes
   - documentation updates
   - device property cleanups and const * changes
   - firmware loader dependency fixes.
 
 All of these have been in linux-next for a while with no reported
 problems.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCZEp7Sw8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ykitQCfamUHpxGcKOAGuLXMotXNakTEsxgAoIquENm5
 LEGadNS38k5fs+73UaxV
 =7K4B
 -----END PGP SIGNATURE-----

Merge tag 'driver-core-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core updates from Greg KH:
 "Here is the large set of driver core changes for 6.4-rc1.

  Once again, this was a busy development cycle, with lots of changes
  happening in the driver core in the quest to move "struct bus" and
  "struct class" into read-only memory, a task that is now complete
  with these changes.

  This will make the future rust interactions with the driver core more
  "provably correct" as well as providing more obvious lifetime rules
  for all busses and classes in the kernel.

  The changes required for this did touch many individual classes and
  busses as many callbacks were changed to take const * parameters
  instead. All of these changes have been submitted to the various
  subsystem maintainers, giving them plenty of time to review, and most
  of them actually did so.

  Other than those changes, included in here are a small set of other
  things:

   - kobject logging improvements

   - cacheinfo improvements and updates

   - obligatory fw_devlink updates and fixes

   - documentation updates

   - device property cleanups and const * changes

   - firmware loader dependency fixes.

  All of these have been in linux-next for a while with no reported
  problems"

* tag 'driver-core-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (120 commits)
  device property: make device_property functions take const device *
  driver core: update comments in device_rename()
  driver core: Don't require dynamic_debug for initcall_debug probe timing
  firmware_loader: rework crypto dependencies
  firmware_loader: Strip off \n from customized path
  zram: fix up permission for the hot_add sysfs file
  cacheinfo: Add use_arch[|_cache]_info field/function
  arch_topology: Remove early cacheinfo error message if -ENOENT
  cacheinfo: Check cache properties are present in DT
  cacheinfo: Check sib_leaf in cache_leaves_are_shared()
  cacheinfo: Allow early level detection when DT/ACPI info is missing/broken
  cacheinfo: Add arm64 early level initializer implementation
  cacheinfo: Add arch specific early level initializer
  tty: make tty_class a static const structure
  driver core: class: remove struct class_interface * from callbacks
  driver core: class: mark the struct class in struct class_interface constant
  driver core: class: make class_register() take a const *
  driver core: class: mark class_release() as taking a const *
  driver core: remove incorrect comment for device_create*
  MIPS: vpe-cmp: remove module owner pointer from struct class usage.
  ...
Merge committed by Linus Torvalds on 2023-04-27 11:53:57 -07:00 (commit 556eb8b791)
297 changed files with 1691 additions and 1296 deletions

View File

@ -21,4 +21,9 @@ Description:
at the time the kernel starts are not affected or limited in
any way by sync_state() callbacks.
Writing "1" to this file will force a call to the device's
sync_state() function if it hasn't been called already. The
sync_state() call happens independent of the state of the
consumer devices.
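
For orientation, the hook that this attribute forces is the sync_state()
callback on the device's driver (or bus). A minimal, hypothetical driver-side
sketch (names invented, not from this series):

  #include <linux/platform_device.h>

  /* Runs once all consumers have probed, or when "1" is written to this
   * device's sync_state sysfs file.
   */
  static void example_sync_state(struct device *dev)
  {
          /* release boot-time-only resource votes held for consumers */
  }

  static struct platform_driver example_driver = {
          .driver = {
                  .name       = "example-sync",
                  .sync_state = example_sync_state,
          },
  };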

View File

@ -1602,6 +1602,20 @@
dependencies. This only applies for fw_devlink=on|rpm.
Format: <bool>
fw_devlink.sync_state =
[KNL] When all devices that could probe have finished
probing, this parameter controls what to do with
devices that haven't yet received their sync_state()
calls.
Format: { strict | timeout }
strict -- Default. Continue waiting on consumers to
probe successfully.
timeout -- Give up waiting on consumers and call
sync_state() on any devices that haven't yet
received their sync_state() calls after
deferred_probe_timeout has expired or by
late_initcall() if !CONFIG_MODULES.
gamecon.map[2|3]=
[HW,JOY] Multisystem joystick and NES/SNES/PSX pad
support via parallel port (up to 5 devices per port)
@ -6150,15 +6164,6 @@
later by a loaded module cannot be set this way.
Example: sysctl.vm.swappiness=40
sysfs.deprecated=0|1 [KNL]
Enable/disable old style sysfs layout for old udev
on older distributions. When this option is enabled
very new udev will not work anymore. When this option
is disabled (or CONFIG_SYSFS_DEPRECATED not compiled)
in older udev will not work anymore.
Default depends on CONFIG_SYSFS_DEPRECATED_V2 set in
the kernel configuration.
sysrq_always_enabled
[KNL]
Ignore sysrq setting - this boot parameter will

View File

@ -125,8 +125,8 @@ Exporting Attributes
struct bus_attribute {
struct attribute attr;
ssize_t (*show)(struct bus_type *, char * buf);
ssize_t (*store)(struct bus_type *, const char * buf, size_t count);
ssize_t (*show)(const struct bus_type *, char * buf);
ssize_t (*store)(const struct bus_type *, const char * buf, size_t count);
};
Bus drivers can export attributes using the BUS_ATTR_RW macro that works
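
A hedged illustration of the updated callback signatures used together with
BUS_ATTR_RW() (the attribute name is invented)::

  static ssize_t example_show(const struct bus_type *bus, char *buf)
  {
          return sysfs_emit(buf, "example value\n");
  }

  static ssize_t example_store(const struct bus_type *bus, const char *buf,
                               size_t count)
  {
          /* parse and apply the new value here */
          return count;
  }
  static BUS_ATTR_RW(example);

BUS_ATTR_RW(example) creates a struct bus_attribute named bus_attr_example
wired to example_show() and example_store().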

View File

@ -57,7 +57,8 @@ function calls firmware_upload_unregister() such as::
len = (truncate) ? truncate - fw_name : strlen(fw_name);
sec->fw_name = kmemdup_nul(fw_name, len, GFP_KERNEL);
fwl = firmware_upload_register(sec->dev, sec->fw_name, &m10bmc_ops, sec);
fwl = firmware_upload_register(THIS_MODULE, sec->dev, sec->fw_name,
&m10bmc_ops, sec);
if (IS_ERR(fwl)) {
dev_err(sec->dev, "Firmware Upload driver failed to start\n");
kfree(sec->fw_name);

View File

@ -373,8 +373,8 @@ Structure::
struct bus_attribute {
struct attribute attr;
ssize_t (*show)(struct bus_type *, char * buf);
ssize_t (*store)(struct bus_type *, const char * buf, size_t count);
ssize_t (*show)(const struct bus_type *, char * buf);
ssize_t (*store)(const struct bus_type *, const char * buf, size_t count);
};
Declaring::

View File

@ -329,8 +329,8 @@ void device_remove_file(struct device *dev, const struct device_attribute * attr
struct bus_attribute {
struct attribute attr;
ssize_t (*show)(struct bus_type *, char * buf);
ssize_t (*store)(struct bus_type *, const char * buf, size_t count);
ssize_t (*show)(const struct bus_type *, char * buf);
ssize_t (*store)(const struct bus_type *, const char * buf, size_t count);
};
Declaring::

View File

@ -332,8 +332,8 @@ void device_remove_file(struct device *dev, const struct device_attribute * attr
struct bus_attribute {
struct attribute attr;
ssize_t (*show)(struct bus_type *, char * buf);
ssize_t (*store)(struct bus_type *, const char * buf, size_t count);
ssize_t (*show)(const struct bus_type *, char * buf);
ssize_t (*store)(const struct bus_type *, const char * buf, size_t count);
};
Declaring::

View File

@ -6333,7 +6333,9 @@ F: drivers/base/
F: fs/debugfs/
F: fs/sysfs/
F: include/linux/debugfs.h
F: include/linux/fwnode.h
F: include/linux/kobj*
F: include/linux/property.h
F: lib/kobj*
DRIVERS FOR OMAP ADAPTIVE VOLTAGE SCALING (AVS)

View File

@ -24,7 +24,7 @@ struct dma_iommu_mapping {
};
struct dma_iommu_mapping *
arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, u64 size);
arm_iommu_create_mapping(const struct bus_type *bus, dma_addr_t base, u64 size);
void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping);

View File

@ -1543,7 +1543,7 @@ static const struct dma_map_ops iommu_ops = {
* arm_iommu_attach_device function.
*/
struct dma_iommu_mapping *
arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, u64 size)
arm_iommu_create_mapping(const struct bus_type *bus, dma_addr_t base, u64 size)
{
unsigned int bits = size >> PAGE_SHIFT;
unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long);

View File

@ -38,11 +38,9 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
this_leaf->type = type;
}
int init_cache_level(unsigned int cpu)
static void detect_cache_level(unsigned int *level_p, unsigned int *leaves_p)
{
unsigned int ctype, level, leaves;
int fw_level, ret;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
for (level = 1, leaves = 0; level <= MAX_CACHE_LEVEL; level++) {
ctype = get_cache_type(level);
@ -54,6 +52,27 @@ int init_cache_level(unsigned int cpu)
leaves += (ctype == CACHE_TYPE_SEPARATE) ? 2 : 1;
}
*level_p = level;
*leaves_p = leaves;
}
int early_cache_level(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
detect_cache_level(&this_cpu_ci->num_levels, &this_cpu_ci->num_leaves);
return 0;
}
int init_cache_level(unsigned int cpu)
{
unsigned int level, leaves;
int fw_level, ret;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
detect_cache_level(&level, &leaves);
if (acpi_disabled) {
fw_level = of_find_last_cache_level(cpu);
} else {

View File

@ -1497,10 +1497,18 @@ static const DEVICE_ATTR_RO(aarch32_el0);
static int __init aarch32_el0_sysfs_init(void)
{
struct device *dev_root;
int ret = 0;
if (!allow_mismatched_32bit_el0)
return 0;
return device_create_file(cpu_subsys.dev_root, &dev_attr_aarch32_el0);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
ret = device_create_file(dev_root, &dev_attr_aarch32_el0);
put_device(dev_root);
}
return ret;
}
device_initcall(aarch32_el0_sysfs_init);

View File

@ -234,7 +234,7 @@ static int __init mips_mt_init(void)
{
struct class *mtc;
mtc = class_create(THIS_MODULE, "mt");
mtc = class_create("mt");
if (IS_ERR(mtc))
return PTR_ERR(mtc);

View File

@ -79,7 +79,6 @@ static void vpe_device_release(struct device *cd)
static struct class vpe_class = {
.name = "vpe",
.owner = THIS_MODULE,
.dev_release = vpe_device_release,
.dev_groups = vpe_groups,
};

View File

@ -316,7 +316,6 @@ static void vpe_device_release(struct device *cd)
static struct class vpe_class = {
.name = "vpe",
.owner = THIS_MODULE,
.dev_release = vpe_device_release,
.dev_groups = vpe_groups,
};

View File

@ -550,7 +550,7 @@ static int __init sbprof_tb_init(void)
return -EIO;
}
tbc = class_create(THIS_MODULE, "sb_tracebuffer");
tbc = class_create("sb_tracebuffer");
if (IS_ERR(tbc)) {
err = PTR_ERR(tbc);
goto out_chrdev;

View File

@ -217,13 +217,18 @@ static DEVICE_ATTR(dscr_default, 0600,
static void __init sysfs_create_dscr_default(void)
{
if (cpu_has_feature(CPU_FTR_DSCR)) {
struct device *dev_root;
int cpu;
dscr_default = spr_default_dscr;
for_each_possible_cpu(cpu)
paca_ptrs[cpu]->dscr_default = dscr_default;
device_create_file(cpu_subsys.dev_root, &dev_attr_dscr_default);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
device_create_file(dev_root, &dev_attr_dscr_default);
put_device(dev_root);
}
}
}
#endif /* CONFIG_PPC64 */
@ -746,7 +751,12 @@ static DEVICE_ATTR(svm, 0444, show_svm, NULL);
static void __init create_svm_file(void)
{
device_create_file(cpu_subsys.dev_root, &dev_attr_svm);
struct device *dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
device_create_file(dev_root, &dev_attr_svm);
put_device(dev_root);
}
}
#else
static void __init create_svm_file(void)

View File

@ -581,7 +581,7 @@ int vas_register_coproc_api(struct module *mod, enum vas_cop_type cop_type,
pr_devel("%s device allocated, dev [%i,%i]\n", name,
MAJOR(coproc_device.devt), MINOR(coproc_device.devt));
coproc_device.class = class_create(mod, name);
coproc_device.class = class_create(name);
if (IS_ERR(coproc_device.class)) {
rc = PTR_ERR(coproc_device.class);
pr_err("Unable to create %s class %d\n", name, rc);

View File

@ -1464,14 +1464,19 @@ static int __init pnv_init_idle_states(void)
power7_fastsleep_workaround_entry = false;
power7_fastsleep_workaround_exit = false;
} else {
struct device *dev_root;
/*
* OPAL_PM_SLEEP_ENABLED_ER1 is set. It indicates that
* workaround is needed to use fastsleep. Provide sysfs
* control to choose how this workaround has to be
* applied.
*/
device_create_file(cpu_subsys.dev_root,
&dev_attr_fastsleep_workaround_applyonce);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
device_create_file(dev_root,
&dev_attr_fastsleep_workaround_applyonce);
put_device(dev_root);
}
}
update_subcore_sibling_mask();

View File

@ -415,7 +415,9 @@ static DEVICE_ATTR(subcores_per_core, 0644,
static int subcore_init(void)
{
struct device *dev_root;
unsigned pvr_ver;
int rc = 0;
pvr_ver = PVR_VER(mfspr(SPRN_PVR));
@ -435,7 +437,11 @@ static int subcore_init(void)
set_subcores_per_core(1);
return device_create_file(cpu_subsys.dev_root,
&dev_attr_subcores_per_core);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
rc = device_create_file(dev_root, &dev_attr_subcores_per_core);
put_device(dev_root);
}
return rc;
}
machine_device_initcall(powernv, subcore_init);

View File

@ -512,7 +512,7 @@ static int dlpar_parse_id_type(char **cmd, struct pseries_hp_errorlog *hp_elog)
return 0;
}
static ssize_t dlpar_store(struct class *class, struct class_attribute *attr,
static ssize_t dlpar_store(const struct class *class, const struct class_attribute *attr,
const char *buf, size_t count)
{
struct pseries_hp_errorlog hp_elog;
@ -551,7 +551,7 @@ dlpar_store_out:
return rc ? rc : count;
}
static ssize_t dlpar_show(struct class *class, struct class_attribute *attr,
static ssize_t dlpar_show(const struct class *class, const struct class_attribute *attr,
char *buf)
{
return sprintf(buf, "%s\n", "memory,cpu");

View File

@ -267,7 +267,7 @@ static char *ibmebus_chomp(const char *in, size_t count)
return out;
}
static ssize_t probe_store(struct bus_type *bus, const char *buf, size_t count)
static ssize_t probe_store(const struct bus_type *bus, const char *buf, size_t count)
{
struct device_node *dn = NULL;
struct device *dev;
@ -305,7 +305,7 @@ out:
}
static BUS_ATTR_WO(probe);
static ssize_t remove_store(struct bus_type *bus, const char *buf, size_t count)
static ssize_t remove_store(const struct bus_type *bus, const char *buf, size_t count)
{
struct device *dev;
char *path;

View File

@ -787,8 +787,8 @@ int rtas_syscall_dispatch_ibm_suspend_me(u64 handle)
return pseries_migrate_partition(handle);
}
static ssize_t migration_store(struct class *class,
struct class_attribute *attr, const char *buf,
static ssize_t migration_store(const struct class *class,
const struct class_attribute *attr, const char *buf,
size_t count)
{
u64 streamid;

View File

@ -300,20 +300,22 @@ static struct device_attribute attr_percpu_deactivate_hint =
static int __init pseries_energy_init(void)
{
int cpu, err;
struct device *cpu_dev;
struct device *cpu_dev, *dev_root;
if (!firmware_has_feature(FW_FEATURE_BEST_ENERGY))
return 0; /* H_BEST_ENERGY hcall not supported */
/* Create the sysfs files */
err = device_create_file(cpu_subsys.dev_root,
&attr_cpu_activate_hint_list);
if (!err)
err = device_create_file(cpu_subsys.dev_root,
&attr_cpu_deactivate_hint_list);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
err = device_create_file(dev_root, &attr_cpu_activate_hint_list);
if (!err)
err = device_create_file(dev_root, &attr_cpu_deactivate_hint_list);
put_device(dev_root);
if (err)
return err;
}
if (err)
return err;
for_each_possible_cpu(cpu) {
cpu_dev = get_cpu_device(cpu);
err = device_create_file(cpu_dev,
@ -337,14 +339,18 @@ static int __init pseries_energy_init(void)
static void __exit pseries_energy_cleanup(void)
{
int cpu;
struct device *cpu_dev;
struct device *cpu_dev, *dev_root;
if (!sysfs_entries)
return;
/* Remove the sysfs files */
device_remove_file(cpu_subsys.dev_root, &attr_cpu_activate_hint_list);
device_remove_file(cpu_subsys.dev_root, &attr_cpu_deactivate_hint_list);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
device_remove_file(dev_root, &attr_cpu_activate_hint_list);
device_remove_file(dev_root, &attr_cpu_deactivate_hint_list);
put_device(dev_root);
}
for_each_possible_cpu(cpu) {
cpu_dev = get_cpu_device(cpu);

View File

@ -143,6 +143,7 @@ static const struct platform_suspend_ops pseries_suspend_ops = {
**/
static int pseries_suspend_sysfs_register(struct device *dev)
{
struct device *dev_root;
int rc;
if ((rc = subsys_system_register(&suspend_subsys, NULL)))
@ -151,8 +152,13 @@ static int pseries_suspend_sysfs_register(struct device *dev)
dev->id = 0;
dev->bus = &suspend_subsys;
if ((rc = device_create_file(suspend_subsys.dev_root, &dev_attr_hibernate)))
goto subsys_unregister;
dev_root = bus_get_dev_root(&suspend_subsys);
if (dev_root) {
rc = device_create_file(dev_root, &dev_attr_hibernate);
put_device(dev_root);
if (rc)
goto subsys_unregister;
}
return 0;

View File

@ -1006,7 +1006,7 @@ ATTRIBUTE_GROUPS(vio_cmo_dev);
/* sysfs bus functions and data structures for CMO */
#define viobus_cmo_rd_attr(name) \
static ssize_t cmo_bus_##name##_show(struct bus_type *bt, char *buf) \
static ssize_t cmo_bus_##name##_show(const struct bus_type *bt, char *buf) \
{ \
return sprintf(buf, "%lu\n", vio_cmo.name); \
} \
@ -1015,7 +1015,7 @@ static struct bus_attribute bus_attr_cmo_bus_##name = \
#define viobus_cmo_pool_rd_attr(name, var) \
static ssize_t \
cmo_##name##_##var##_show(struct bus_type *bt, char *buf) \
cmo_##name##_##var##_show(const struct bus_type *bt, char *buf) \
{ \
return sprintf(buf, "%lu\n", vio_cmo.name.var); \
} \
@ -1030,12 +1030,12 @@ viobus_cmo_pool_rd_attr(reserve, size);
viobus_cmo_pool_rd_attr(excess, size);
viobus_cmo_pool_rd_attr(excess, free);
static ssize_t cmo_high_show(struct bus_type *bt, char *buf)
static ssize_t cmo_high_show(const struct bus_type *bt, char *buf)
{
return sprintf(buf, "%lu\n", vio_cmo.high);
}
static ssize_t cmo_high_store(struct bus_type *bt, const char *buf,
static ssize_t cmo_high_store(const struct bus_type *bt, const char *buf,
size_t count)
{
unsigned long flags;

View File

@ -116,7 +116,8 @@ static struct device_attribute mpic_attributes = __ATTR(timer_wakeup, 0644,
static int __init fsl_wakeup_sys_init(void)
{
int ret;
struct device *dev_root;
int ret = -EINVAL;
fsl_wakeup = kzalloc(sizeof(struct fsl_mpic_timer_wakeup), GFP_KERNEL);
if (!fsl_wakeup)
@ -124,16 +125,26 @@ static int __init fsl_wakeup_sys_init(void)
INIT_WORK(&fsl_wakeup->free_work, fsl_free_resource);
ret = device_create_file(mpic_subsys.dev_root, &mpic_attributes);
if (ret)
kfree(fsl_wakeup);
dev_root = bus_get_dev_root(&mpic_subsys);
if (dev_root) {
ret = device_create_file(dev_root, &mpic_attributes);
put_device(dev_root);
if (ret)
kfree(fsl_wakeup);
}
return ret;
}
static void __exit fsl_wakeup_sys_exit(void)
{
device_remove_file(mpic_subsys.dev_root, &mpic_attributes);
struct device *dev_root;
dev_root = bus_get_dev_root(&mpic_subsys);
if (dev_root) {
device_remove_file(dev_root, &mpic_attributes);
put_device(dev_root);
}
mutex_lock(&sysfs_lock);

View File

@ -1227,11 +1227,17 @@ static DEVICE_ATTR_WO(rescan);
static int __init s390_smp_init(void)
{
struct device *dev_root;
int cpu, rc = 0;
rc = device_create_file(cpu_subsys.dev_root, &dev_attr_rescan);
if (rc)
return rc;
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
rc = device_create_file(dev_root, &dev_attr_rescan);
put_device(dev_root);
if (rc)
return rc;
}
for_each_present_cpu(cpu) {
rc = smp_add_present_cpu(cpu);
if (rc)

View File

@ -649,12 +649,21 @@ static struct ctl_table topology_dir_table[] = {
static int __init topology_init(void)
{
struct device *dev_root;
int rc = 0;
timer_setup(&topology_timer, topology_timer_fn, TIMER_DEFERRABLE);
if (MACHINE_HAS_TOPOLOGY)
set_topology_timer();
else
topology_update_polarization_simple();
register_sysctl_table(topology_dir_table);
return device_create_file(cpu_subsys.dev_root, &dev_attr_dispatching);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
rc = device_create_file(dev_root, &dev_attr_dispatching);
put_device(dev_root);
}
return rc;
}
device_initcall(topology_init);

View File

@ -45,13 +45,19 @@ static DEVICE_ATTR(devices, S_IRUGO, dma_show_devices, NULL);
static int __init dma_subsys_init(void)
{
struct device *dev_root;
int ret;
ret = subsys_system_register(&dma_subsys, NULL);
if (unlikely(ret))
return ret;
return device_create_file(dma_subsys.dev_root, &dev_attr_devices);
dev_root = bus_get_dev_root(&dma_subsys);
if (dev_root) {
ret = device_create_file(dev_root, &dev_attr_devices);
put_device(dev_root);
}
return ret;
}
postcore_initcall(dma_subsys_init);

View File

@ -632,6 +632,7 @@ static const struct attribute_group cpu_root_microcode_group = {
static int __init microcode_init(void)
{
struct device *dev_root;
struct cpuinfo_x86 *c = &boot_cpu_data;
int error;
@ -652,10 +653,14 @@ static int __init microcode_init(void)
if (IS_ERR(microcode_pdev))
return PTR_ERR(microcode_pdev);
error = sysfs_create_group(&cpu_subsys.dev_root->kobj, &cpu_root_microcode_group);
if (error) {
pr_err("Error creating microcode group!\n");
goto out_pdev;
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
error = sysfs_create_group(&dev_root->kobj, &cpu_root_microcode_group);
put_device(dev_root);
if (error) {
pr_err("Error creating microcode group!\n");
goto out_pdev;
}
}
/* Do per-CPU setup */

View File

@ -1580,7 +1580,7 @@ int rdt_pseudo_lock_init(void)
pseudo_lock_major = ret;
pseudo_lock_class = class_create(THIS_MODULE, "pseudo_lock");
pseudo_lock_class = class_create("pseudo_lock");
if (IS_ERR(pseudo_lock_class)) {
ret = PTR_ERR(pseudo_lock_class);
unregister_chrdev(pseudo_lock_major, "pseudo_lock");

View File

@ -232,7 +232,11 @@ static int __init umwait_init(void)
* Add umwait control interface. Ignore failure, so at least the
* default values are set up in case the machine manages to boot.
*/
dev = cpu_subsys.dev_root;
return sysfs_create_group(&dev->kobj, &umwait_attr_group);
dev = bus_get_dev_root(&cpu_subsys);
if (dev) {
ret = sysfs_create_group(&dev->kobj, &umwait_attr_group);
put_device(dev);
}
return ret;
}
device_initcall(umwait_init);

View File

@ -154,7 +154,7 @@ static int __init cpuid_init(void)
CPUID_MAJOR);
return -EBUSY;
}
cpuid_class = class_create(THIS_MODULE, "cpuid");
cpuid_class = class_create("cpuid");
if (IS_ERR(cpuid_class)) {
err = PTR_ERR(cpuid_class);
goto out_chrdev;

View File

@ -263,7 +263,7 @@ static int __init msr_init(void)
pr_err("unable to get major %d for msr\n", MSR_MAJOR);
return -EBUSY;
}
msr_class = class_create(THIS_MODULE, "msr");
msr_class = class_create("msr");
if (IS_ERR(msr_class)) {
err = PTR_ERR(msr_class);
goto out_chrdev;

View File

@ -245,7 +245,7 @@ static int __init bsg_init(void)
dev_t devid;
int ret;
bsg_class = class_create(THIS_MODULE, "bsg");
bsg_class = class_create("bsg");
if (IS_ERR(bsg_class))
return PTR_ERR(bsg_class);
bsg_class->devnode = bsg_devnode;

View File

@ -475,12 +475,10 @@ int __must_check device_add_disk(struct device *parent, struct gendisk *disk,
if (ret)
goto out_device_del;
if (!sysfs_deprecated) {
ret = sysfs_create_link(block_depr, &ddev->kobj,
kobject_name(&ddev->kobj));
if (ret)
goto out_device_del;
}
ret = sysfs_create_link(block_depr, &ddev->kobj,
kobject_name(&ddev->kobj));
if (ret)
goto out_device_del;
/*
* avoid probable deadlock caused by allocating memory with
@ -563,8 +561,7 @@ out_put_holder_dir:
out_del_integrity:
blk_integrity_del(disk);
out_del_block_link:
if (!sysfs_deprecated)
sysfs_remove_link(block_depr, dev_name(ddev));
sysfs_remove_link(block_depr, dev_name(ddev));
out_device_del:
device_del(ddev);
out_free_ext_minor:
@ -666,8 +663,7 @@ void del_gendisk(struct gendisk *disk)
part_stat_set_all(disk->part0, 0);
disk->part0->bd_stamp = 0;
if (!sysfs_deprecated)
sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
sysfs_remove_link(block_depr, dev_name(disk_to_dev(disk)));
pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);
device_del(disk_to_dev(disk));
@ -912,7 +908,6 @@ static int __init genhd_device_init(void)
{
int error;
block_class.dev_kobj = sysfs_dev_block_kobj;
error = class_register(&block_class);
if (unlikely(error))
return error;
@ -921,8 +916,7 @@ static int __init genhd_device_init(void)
register_blkdev(BLOCK_EXT_MAJOR, "blkext");
/* create top-level block dir */
if (!sysfs_deprecated)
block_depr = kobject_create_and_add("block", NULL);
block_depr = kobject_create_and_add("block", NULL);
return 0;
}

View File

@ -34,7 +34,7 @@ static char *accel_devnode(const struct device *dev, umode_t *mode)
static int accel_sysfs_init(void)
{
accel_class = class_create(THIS_MODULE, "accel");
accel_class = class_create("accel");
if (IS_ERR(accel_class))
return PTR_ERR(accel_class);

View File

@ -696,7 +696,7 @@ static int __init hl_init(void)
hl_major = MAJOR(dev);
hl_class = class_create(THIS_MODULE, HL_NAME);
hl_class = class_create(HL_NAME);
if (IS_ERR(hl_class)) {
pr_err("failed to allocate class\n");
rc = PTR_ERR(hl_class);

View File

@ -98,6 +98,12 @@ EXPORT_SYMBOL_GPL(lpit_read_residency_count_address);
static void lpit_update_residency(struct lpit_residency_info *info,
struct acpi_lpit_native *lpit_native)
{
struct device *dev_root = bus_get_dev_root(&cpu_subsys);
/* Silently fail, if cpuidle attribute group is not present */
if (!dev_root)
return;
info->frequency = lpit_native->counter_frequency ?
lpit_native->counter_frequency : tsc_khz * 1000;
if (!info->frequency)
@ -108,18 +114,18 @@ static void lpit_update_residency(struct lpit_residency_info *info,
info->iomem_addr = ioremap(info->gaddr.address,
info->gaddr.bit_width / 8);
if (!info->iomem_addr)
return;
goto exit;
/* Silently fail, if cpuidle attribute group is not present */
sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
sysfs_add_file_to_group(&dev_root->kobj,
&dev_attr_low_power_idle_system_residency_us.attr,
"cpuidle");
} else if (info->gaddr.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
/* Silently fail, if cpuidle attribute group is not present */
sysfs_add_file_to_group(&cpu_subsys.dev_root->kobj,
sysfs_add_file_to_group(&dev_root->kobj,
&dev_attr_low_power_idle_cpu_residency_us.attr,
"cpuidle");
}
exit:
put_device(dev_root);
}
static void lpit_process(u64 begin, u64 end)

View File

@ -557,8 +557,7 @@ void pata_parport_unregister_driver(struct pi_protocol *pr)
}
EXPORT_SYMBOL_GPL(pata_parport_unregister_driver);
static ssize_t new_device_store(struct bus_type *bus, const char *buf,
size_t count)
static ssize_t new_device_store(const struct bus_type *bus, const char *buf, size_t count)
{
char port[12] = "auto";
char protocol[8] = "auto";
@ -632,8 +631,7 @@ static void pi_remove_one(struct device *dev)
/* pata_parport_dev_release will do ida_free(dev->id) and kfree(pi) */
}
static ssize_t delete_device_store(struct bus_type *bus, const char *buf,
size_t count)
static ssize_t delete_device_store(const struct bus_type *bus, const char *buf, size_t count)
{
struct device *dev;

View File

@ -230,4 +230,16 @@ config GENERIC_ARCH_NUMA
Enable support for generic NUMA implementation. Currently, RISC-V
and ARM64 use it.
config FW_DEVLINK_SYNC_STATE_TIMEOUT
bool "sync_state() behavior defaults to timeout instead of strict"
help
This is build time equivalent of adding kernel command line parameter
"fw_devlink.sync_state=timeout". Give up waiting on consumers and
call sync_state() on any devices that haven't yet received their
sync_state() calls after deferred_probe_timeout has expired or by
late_initcall() if !CONFIG_MODULES. You should almost always want to
select N here unless you have already successfully tested with the
command line option on every system/board your kernel is expected to
work on.
endmenu

View File

@ -835,18 +835,19 @@ void __init init_cpu_topology(void)
if (ret) {
/*
* Discard anything that was parsed if we hit an error so we
* don't use partial information.
* don't use partial information. But do not return yet to give
* arch-specific early cache level detection a chance to run.
*/
reset_cpu_topology();
return;
}
for_each_possible_cpu(cpu) {
ret = fetch_cache_info(cpu);
if (ret) {
if (!ret)
continue;
else if (ret != -ENOENT)
pr_err("Early cacheinfo failed, ret = %d\n", ret);
break;
}
return;
}
}

View File

@ -27,11 +27,13 @@
* on this bus.
* @bus - pointer back to the struct bus_type that this structure is associated
* with.
* @dev_root: Default device to use as the parent.
*
* @glue_dirs - "glue" directory to put in-between the parent device to
* avoid namespace conflicts
* @class - pointer back to the struct class that this structure is associated
* with.
* @lock_key: Lock class key for use by the lock validator
*
* This structure is the one that is the actual kobject allowing struct
* bus_type/class to be statically allocated safely. Nothing outside of the
@ -48,10 +50,11 @@ struct subsys_private {
struct klist klist_drivers;
struct blocking_notifier_head bus_notifier;
unsigned int drivers_autoprobe:1;
struct bus_type *bus;
const struct bus_type *bus;
struct device *dev_root;
struct kset glue_dirs;
struct class *class;
const struct class *class;
struct lock_class_key lock_key;
};
@ -70,6 +73,8 @@ static inline void subsys_put(struct subsys_private *sp)
kset_put(&sp->subsys);
}
struct subsys_private *class_to_subsys(const struct class *class);
struct driver_private {
struct kobject kobj;
struct klist klist_devices;
@ -122,69 +127,73 @@ struct device_private {
container_of(obj, struct device_private, knode_class)
/* initialisation functions */
extern int devices_init(void);
extern int buses_init(void);
extern int classes_init(void);
extern int firmware_init(void);
int devices_init(void);
int buses_init(void);
int classes_init(void);
int firmware_init(void);
#ifdef CONFIG_SYS_HYPERVISOR
extern int hypervisor_init(void);
int hypervisor_init(void);
#else
static inline int hypervisor_init(void) { return 0; }
#endif
extern int platform_bus_init(void);
extern void cpu_dev_init(void);
extern void container_dev_init(void);
int platform_bus_init(void);
void cpu_dev_init(void);
void container_dev_init(void);
#ifdef CONFIG_AUXILIARY_BUS
extern void auxiliary_bus_init(void);
void auxiliary_bus_init(void);
#else
static inline void auxiliary_bus_init(void) { }
#endif
struct kobject *virtual_device_parent(struct device *dev);
extern int bus_add_device(struct device *dev);
extern void bus_probe_device(struct device *dev);
extern void bus_remove_device(struct device *dev);
int bus_add_device(struct device *dev);
void bus_probe_device(struct device *dev);
void bus_remove_device(struct device *dev);
void bus_notify(struct device *dev, enum bus_notifier_event value);
bool bus_is_registered(const struct bus_type *bus);
extern int bus_add_driver(struct device_driver *drv);
extern void bus_remove_driver(struct device_driver *drv);
extern void device_release_driver_internal(struct device *dev,
struct device_driver *drv,
struct device *parent);
int bus_add_driver(struct device_driver *drv);
void bus_remove_driver(struct device_driver *drv);
void device_release_driver_internal(struct device *dev, struct device_driver *drv,
struct device *parent);
extern void driver_detach(struct device_driver *drv);
extern void driver_deferred_probe_del(struct device *dev);
extern void device_set_deferred_probe_reason(const struct device *dev,
struct va_format *vaf);
void driver_detach(struct device_driver *drv);
void driver_deferred_probe_del(struct device *dev);
void device_set_deferred_probe_reason(const struct device *dev, struct va_format *vaf);
static inline int driver_match_device(struct device_driver *drv,
struct device *dev)
{
return drv->bus->match ? drv->bus->match(dev, drv) : 1;
}
extern int driver_add_groups(struct device_driver *drv,
const struct attribute_group **groups);
extern void driver_remove_groups(struct device_driver *drv,
const struct attribute_group **groups);
static inline void dev_sync_state(struct device *dev)
{
if (dev->bus->sync_state)
dev->bus->sync_state(dev);
else if (dev->driver && dev->driver->sync_state)
dev->driver->sync_state(dev);
}
int driver_add_groups(struct device_driver *drv, const struct attribute_group **groups);
void driver_remove_groups(struct device_driver *drv, const struct attribute_group **groups);
void device_driver_detach(struct device *dev);
extern int devres_release_all(struct device *dev);
extern void device_block_probing(void);
extern void device_unblock_probing(void);
extern void deferred_probe_extend_timeout(void);
extern void driver_deferred_probe_trigger(void);
int devres_release_all(struct device *dev);
void device_block_probing(void);
void device_unblock_probing(void);
void deferred_probe_extend_timeout(void);
void driver_deferred_probe_trigger(void);
const char *device_get_devnode(const struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid, const char **tmp);
/* /sys/devices directory */
extern struct kset *devices_kset;
extern void devices_kset_move_last(struct device *dev);
void devices_kset_move_last(struct device *dev);
#if defined(CONFIG_MODULES) && defined(CONFIG_SYSFS)
extern void module_add_driver(struct module *mod, struct device_driver *drv);
extern void module_remove_driver(struct device_driver *drv);
void module_add_driver(struct module *mod, struct device_driver *drv);
void module_remove_driver(struct device_driver *drv);
#else
static inline void module_add_driver(struct module *mod,
struct device_driver *drv) { }
@ -192,23 +201,34 @@ static inline void module_remove_driver(struct device_driver *drv) { }
#endif
#ifdef CONFIG_DEVTMPFS
extern int devtmpfs_init(void);
int devtmpfs_init(void);
#else
static inline int devtmpfs_init(void) { return 0; }
#endif
#ifdef CONFIG_BLOCK
extern struct class block_class;
static inline bool is_blockdev(struct device *dev)
{
return dev->class == &block_class;
}
#else
static inline bool is_blockdev(struct device *dev) { return false; }
#endif
/* Device links support */
extern int device_links_read_lock(void);
extern void device_links_read_unlock(int idx);
extern int device_links_read_lock_held(void);
extern int device_links_check_suppliers(struct device *dev);
extern void device_links_force_bind(struct device *dev);
extern void device_links_driver_bound(struct device *dev);
extern void device_links_driver_cleanup(struct device *dev);
extern void device_links_no_driver(struct device *dev);
extern bool device_links_busy(struct device *dev);
extern void device_links_unbind_consumers(struct device *dev);
extern void fw_devlink_drivers_done(void);
int device_links_read_lock(void);
void device_links_read_unlock(int idx);
int device_links_read_lock_held(void);
int device_links_check_suppliers(struct device *dev);
void device_links_force_bind(struct device *dev);
void device_links_driver_bound(struct device *dev);
void device_links_driver_cleanup(struct device *dev);
void device_links_no_driver(struct device *dev);
bool device_links_busy(struct device *dev);
void device_links_unbind_consumers(struct device *dev);
void fw_devlink_drivers_done(void);
void fw_devlink_probing_done(void);
/* device pm support */
void device_pm_move_to_tail(struct device *dev);

View File

@ -84,7 +84,7 @@ done:
return sp;
}
static struct bus_type *bus_get(struct bus_type *bus)
static const struct bus_type *bus_get(const struct bus_type *bus)
{
struct subsys_private *sp = bus_to_subsys(bus);
@ -233,7 +233,7 @@ static const struct kset_uevent_ops bus_uevent_ops = {
static ssize_t unbind_store(struct device_driver *drv, const char *buf,
size_t count)
{
struct bus_type *bus = bus_get(drv->bus);
const struct bus_type *bus = bus_get(drv->bus);
struct device *dev;
int err = -ENODEV;
@ -256,7 +256,7 @@ static DRIVER_ATTR_IGNORE_LOCKDEP(unbind, 0200, NULL, unbind_store);
static ssize_t bind_store(struct device_driver *drv, const char *buf,
size_t count)
{
struct bus_type *bus = bus_get(drv->bus);
const struct bus_type *bus = bus_get(drv->bus);
struct device *dev;
int err = -ENODEV;
@ -274,7 +274,7 @@ static ssize_t bind_store(struct device_driver *drv, const char *buf,
}
static DRIVER_ATTR_IGNORE_LOCKDEP(bind, 0200, NULL, bind_store);
static ssize_t drivers_autoprobe_show(struct bus_type *bus, char *buf)
static ssize_t drivers_autoprobe_show(const struct bus_type *bus, char *buf)
{
struct subsys_private *sp = bus_to_subsys(bus);
int ret;
@ -287,7 +287,7 @@ static ssize_t drivers_autoprobe_show(struct bus_type *bus, char *buf)
return ret;
}
static ssize_t drivers_autoprobe_store(struct bus_type *bus,
static ssize_t drivers_autoprobe_store(const struct bus_type *bus,
const char *buf, size_t count)
{
struct subsys_private *sp = bus_to_subsys(bus);
@ -304,7 +304,7 @@ static ssize_t drivers_autoprobe_store(struct bus_type *bus,
return count;
}
static ssize_t drivers_probe_store(struct bus_type *bus,
static ssize_t drivers_probe_store(const struct bus_type *bus,
const char *buf, size_t count)
{
struct device *dev;
@ -769,7 +769,7 @@ static int __must_check bus_rescan_devices_helper(struct device *dev,
* attached and rescan it against existing drivers to see if it matches
* any by calling device_attach() for the unbound devices.
*/
int bus_rescan_devices(struct bus_type *bus)
int bus_rescan_devices(const struct bus_type *bus)
{
return bus_for_each_dev(bus, NULL, NULL, bus_rescan_devices_helper);
}
@ -808,7 +808,7 @@ static void klist_devices_put(struct klist_node *n)
put_device(dev);
}
static ssize_t bus_uevent_store(struct bus_type *bus,
static ssize_t bus_uevent_store(const struct bus_type *bus,
const char *buf, size_t count)
{
struct subsys_private *sp = bus_to_subsys(bus);
@ -841,7 +841,7 @@ static struct bus_attribute bus_attr_uevent = __ATTR(uevent, 0200, NULL,
* infrastructure, then register the children subsystems it has:
* the devices and drivers that belong to the subsystem.
*/
int bus_register(struct bus_type *bus)
int bus_register(const struct bus_type *bus)
{
int retval;
struct subsys_private *priv;
@ -935,8 +935,8 @@ void bus_unregister(const struct bus_type *bus)
return;
pr_debug("bus: '%s': unregistering\n", bus->name);
if (bus->dev_root)
device_unregister(bus->dev_root);
if (sp->dev_root)
device_unregister(sp->dev_root);
bus_kobj = &sp->subsys.kobj;
sysfs_remove_groups(bus_kobj, bus->bus_groups);
@ -1198,6 +1198,7 @@ static int subsys_register(struct bus_type *subsys,
const struct attribute_group **groups,
struct kobject *parent_of_root)
{
struct subsys_private *sp;
struct device *dev;
int err;
@ -1205,6 +1206,12 @@ static int subsys_register(struct bus_type *subsys,
if (err < 0)
return err;
sp = bus_to_subsys(subsys);
if (!sp) {
err = -EINVAL;
goto err_sp;
}
dev = kzalloc(sizeof(struct device), GFP_KERNEL);
if (!dev) {
err = -ENOMEM;
@ -1223,7 +1230,8 @@ static int subsys_register(struct bus_type *subsys,
if (err < 0)
goto err_dev_reg;
subsys->dev_root = dev;
sp->dev_root = dev;
subsys_put(sp);
return 0;
err_dev_reg:
@ -1232,6 +1240,8 @@ err_dev_reg:
err_name:
kfree(dev);
err_dev:
subsys_put(sp);
err_sp:
bus_unregister(subsys);
return err;
}
@ -1297,7 +1307,7 @@ EXPORT_SYMBOL_GPL(subsys_virtual_register);
* from being unregistered or unloaded while the caller is using it.
* The caller is responsible for preventing this.
*/
struct device_driver *driver_find(const char *name, struct bus_type *bus)
struct device_driver *driver_find(const char *name, const struct bus_type *bus)
{
struct subsys_private *sp = bus_to_subsys(bus);
struct kobject *k;
@ -1349,9 +1359,15 @@ bool bus_is_registered(const struct bus_type *bus)
*/
struct device *bus_get_dev_root(const struct bus_type *bus)
{
if (bus)
return get_device(bus->dev_root);
return NULL;
struct subsys_private *sp = bus_to_subsys(bus);
struct device *dev_root;
if (!sp)
return NULL;
dev_root = get_device(sp->dev_root);
subsys_put(sp);
return dev_root;
}
EXPORT_SYMBOL_GPL(bus_get_dev_root);
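/*
 * Editor's note, not part of this diff: a hypothetical caller following the
 * same pattern as the tree-wide conversions above (the attribute is invented,
 * cpu_subsys is the real CPU subsystem).
 */
static int __init example_cpu_sysfs_init(void)
{
        struct device *dev_root;
        int ret = 0;

        /* takes a reference on the bus root device, if one exists */
        dev_root = bus_get_dev_root(&cpu_subsys);
        if (dev_root) {
                ret = device_create_file(dev_root, &dev_attr_example);
                put_device(dev_root);
        }
        return ret;
}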

View File

@ -28,6 +28,9 @@ static DEFINE_PER_CPU(struct cpu_cacheinfo, ci_cpu_cacheinfo);
#define per_cpu_cacheinfo_idx(cpu, idx) \
(per_cpu_cacheinfo(cpu) + (idx))
/* Set if no cache information is found in DT/ACPI. */
static bool use_arch_info;
struct cpu_cacheinfo *get_cpu_cacheinfo(unsigned int cpu)
{
return ci_cacheinfo(cpu);
@ -38,11 +41,11 @@ static inline bool cache_leaves_are_shared(struct cacheinfo *this_leaf,
{
/*
* For non DT/ACPI systems, assume unique level 1 caches,
* system-wide shared caches for all other levels. This will be used
* only if arch specific code has not populated shared_cpu_map
* system-wide shared caches for all other levels.
*/
if (!(IS_ENABLED(CONFIG_OF) || IS_ENABLED(CONFIG_ACPI)))
return !(this_leaf->level == 1);
if (!(IS_ENABLED(CONFIG_OF) || IS_ENABLED(CONFIG_ACPI)) ||
use_arch_info)
return (this_leaf->level != 1) && (sib_leaf->level != 1);
if ((sib_leaf->attributes & CACHE_ID) &&
(this_leaf->attributes & CACHE_ID))
@ -79,6 +82,9 @@ bool last_level_cache_is_shared(unsigned int cpu_x, unsigned int cpu_y)
}
#ifdef CONFIG_OF
static bool of_check_cache_nodes(struct device_node *np);
/* OF properties to query for a given cache type */
struct cache_type_info {
const char *size_prop;
@ -206,6 +212,11 @@ static int cache_setup_of_node(unsigned int cpu)
return -ENOENT;
}
if (!of_check_cache_nodes(np)) {
of_node_put(np);
return -ENOENT;
}
prev = np;
while (index < cache_leaves(cpu)) {
@ -230,6 +241,25 @@ static int cache_setup_of_node(unsigned int cpu)
return 0;
}
static bool of_check_cache_nodes(struct device_node *np)
{
struct device_node *next;
if (of_property_present(np, "cache-size") ||
of_property_present(np, "i-cache-size") ||
of_property_present(np, "d-cache-size") ||
of_property_present(np, "cache-unified"))
return true;
next = of_find_next_cache_node(np);
if (next) {
of_node_put(next);
return true;
}
return false;
}
static int of_count_cache_leaves(struct device_node *np)
{
unsigned int leaves = 0;
@ -261,6 +291,11 @@ int init_of_cache_level(unsigned int cpu)
struct device_node *prev = NULL;
unsigned int levels = 0, leaves, level;
if (!of_check_cache_nodes(np)) {
of_node_put(np);
return -ENOENT;
}
leaves = of_count_cache_leaves(np);
if (leaves > 0)
levels = 1;
@ -312,6 +347,10 @@ static int cache_setup_properties(unsigned int cpu)
else if (!acpi_disabled)
ret = cache_setup_acpi(cpu);
// Assume there is no cache information available in DT/ACPI from now.
if (ret && use_arch_cache_info())
use_arch_info = true;
return ret;
}
@ -330,7 +369,7 @@ static int cache_shared_cpu_map_setup(unsigned int cpu)
* to update the shared cpu_map if the cache attributes were
* populated early before all the cpus are brought online
*/
if (!last_level_cache_is_valid(cpu)) {
if (!last_level_cache_is_valid(cpu) && !use_arch_info) {
ret = cache_setup_properties(cpu);
if (ret)
return ret;
@ -398,6 +437,11 @@ static void free_cache_attributes(unsigned int cpu)
cache_shared_cpu_map_remove(cpu);
}
int __weak early_cache_level(unsigned int cpu)
{
return -ENOENT;
}
int __weak init_cache_level(unsigned int cpu)
{
return -ENOENT;
@ -423,32 +467,71 @@ int allocate_cache_info(int cpu)
int fetch_cache_info(unsigned int cpu)
{
struct cpu_cacheinfo *this_cpu_ci;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
unsigned int levels = 0, split_levels = 0;
int ret;
if (acpi_disabled) {
ret = init_of_cache_level(cpu);
if (ret < 0)
return ret;
} else {
ret = acpi_get_cache_info(cpu, &levels, &split_levels);
if (ret < 0)
if (!ret) {
this_cpu_ci->num_levels = levels;
/*
* This assumes that:
* - there cannot be any split caches (data/instruction)
* above a unified cache
* - data/instruction caches come by pair
*/
this_cpu_ci->num_leaves = levels + split_levels;
}
}
if (ret || !cache_leaves(cpu)) {
ret = early_cache_level(cpu);
if (ret)
return ret;
this_cpu_ci = get_cpu_cacheinfo(cpu);
this_cpu_ci->num_levels = levels;
/*
* This assumes that:
* - there cannot be any split caches (data/instruction)
* above a unified cache
* - data/instruction caches come by pair
*/
this_cpu_ci->num_leaves = levels + split_levels;
if (!cache_leaves(cpu))
return -ENOENT;
this_cpu_ci->early_ci_levels = true;
}
if (!cache_leaves(cpu))
return allocate_cache_info(cpu);
}
static inline int init_level_allocate_ci(unsigned int cpu)
{
unsigned int early_leaves = cache_leaves(cpu);
/* Since early initialization/allocation of the cacheinfo is allowed
* via fetch_cache_info() and this also gets called as CPU hotplug
* callbacks via cacheinfo_cpu_online, the init/alloc can be skipped
* as it will happen only once (the cacheinfo memory is never freed).
* Just populate the cacheinfo. However, if the cacheinfo has been
* allocated early through the arch-specific early_cache_level() call,
* there is a chance the info is wrong (this can happen on arm64). In
* that case, call init_cache_level() anyway to give the arch-specific
* code a chance to make things right.
*/
if (per_cpu_cacheinfo(cpu) && !ci_cacheinfo(cpu)->early_ci_levels)
return 0;
if (init_cache_level(cpu) || !cache_leaves(cpu))
return -ENOENT;
/*
* Now that we have properly initialized the cache level info, make
* sure we don't try to do that again the next time we are called
* (e.g. as CPU hotplug callbacks).
*/
ci_cacheinfo(cpu)->early_ci_levels = false;
if (cache_leaves(cpu) <= early_leaves)
return 0;
kfree(per_cpu_cacheinfo(cpu));
return allocate_cache_info(cpu);
}
@ -456,23 +539,10 @@ int detect_cache_attributes(unsigned int cpu)
{
int ret;
/* Since early initialization/allocation of the cacheinfo is allowed
* via fetch_cache_info() and this also gets called as CPU hotplug
* callbacks via cacheinfo_cpu_online, the init/alloc can be skipped
* as it will happen only once (the cacheinfo memory is never freed).
* Just populate the cacheinfo.
*/
if (per_cpu_cacheinfo(cpu))
goto populate_leaves;
if (init_cache_level(cpu) || !cache_leaves(cpu))
return -ENOENT;
ret = allocate_cache_info(cpu);
ret = init_level_allocate_ci(cpu);
if (ret)
return ret;
populate_leaves:
/*
* If LLC is valid the cache leaves were already populated so just go to
* update the cpu map.

View File

@ -20,8 +20,52 @@
#include <linux/mutex.h>
#include "base.h"
/* /sys/class */
static struct kset *class_kset;
#define to_class_attr(_attr) container_of(_attr, struct class_attribute, attr)
/**
* class_to_subsys - Turn a struct class into a struct subsys_private
*
* @class: pointer to the struct class to look up
*
* The driver core internals need to work on the subsys_private structure, not
* the external struct class pointer. This function walks the list of
* registered classes in the system and finds the matching one and returns the
* internal struct subsys_private that relates to that class.
*
* Note, the reference count of the return value is INCREMENTED if it is not
* NULL. A call to subsys_put() must be done when finished with the pointer in
* order for it to be properly freed.
*/
struct subsys_private *class_to_subsys(const struct class *class)
{
struct subsys_private *sp = NULL;
struct kobject *kobj;
if (!class || !class_kset)
return NULL;
spin_lock(&class_kset->list_lock);
if (list_empty(&class_kset->list))
goto done;
list_for_each_entry(kobj, &class_kset->list, entry) {
struct kset *kset = container_of(kobj, struct kset, kobj);
sp = container_of_const(kset, struct subsys_private, subsys);
if (sp->class == class)
goto done;
}
sp = NULL;
done:
sp = subsys_get(sp);
spin_unlock(&class_kset->list_lock);
return sp;
}
static ssize_t class_attr_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
@ -49,25 +93,24 @@ static ssize_t class_attr_store(struct kobject *kobj, struct attribute *attr,
static void class_release(struct kobject *kobj)
{
struct subsys_private *cp = to_subsys_private(kobj);
struct class *class = cp->class;
const struct class *class = cp->class;
pr_debug("class '%s': release.\n", class->name);
class->p = NULL;
if (class->class_release)
class->class_release(class);
else
pr_debug("class '%s' does not have a release() function, "
"be careful\n", class->name);
lockdep_unregister_key(&cp->lock_key);
kfree(cp);
}
static const struct kobj_ns_type_operations *class_child_ns_type(const struct kobject *kobj)
{
const struct subsys_private *cp = to_subsys_private(kobj);
struct class *class = cp->class;
const struct class *class = cp->class;
return class->ns_type;
}
@ -83,45 +126,35 @@ static const struct kobj_type class_ktype = {
.child_ns_type = class_child_ns_type,
};
/* Hotplug events for classes go to the class subsys */
static struct kset *class_kset;
int class_create_file_ns(struct class *cls, const struct class_attribute *attr,
int class_create_file_ns(const struct class *cls, const struct class_attribute *attr,
const void *ns)
{
struct subsys_private *sp = class_to_subsys(cls);
int error;
if (cls)
error = sysfs_create_file_ns(&cls->p->subsys.kobj,
&attr->attr, ns);
else
error = -EINVAL;
if (!sp)
return -EINVAL;
error = sysfs_create_file_ns(&sp->subsys.kobj, &attr->attr, ns);
subsys_put(sp);
return error;
}
EXPORT_SYMBOL_GPL(class_create_file_ns);
void class_remove_file_ns(struct class *cls, const struct class_attribute *attr,
void class_remove_file_ns(const struct class *cls, const struct class_attribute *attr,
const void *ns)
{
if (cls)
sysfs_remove_file_ns(&cls->p->subsys.kobj, &attr->attr, ns);
struct subsys_private *sp = class_to_subsys(cls);
if (!sp)
return;
sysfs_remove_file_ns(&sp->subsys.kobj, &attr->attr, ns);
subsys_put(sp);
}
EXPORT_SYMBOL_GPL(class_remove_file_ns);
static struct class *class_get(struct class *cls)
{
if (cls)
kset_get(&cls->p->subsys);
return cls;
}
static void class_put(struct class *cls)
{
if (cls)
kset_put(&cls->p->subsys);
}
static struct device *klist_class_to_dev(struct klist_node *n)
{
struct device_private *p = to_device_private_class(n);
@ -142,21 +175,10 @@ static void klist_class_dev_put(struct klist_node *n)
put_device(dev);
}
static int class_add_groups(struct class *cls,
const struct attribute_group **groups)
{
return sysfs_create_groups(&cls->p->subsys.kobj, groups);
}
static void class_remove_groups(struct class *cls,
const struct attribute_group **groups)
{
return sysfs_remove_groups(&cls->p->subsys.kobj, groups);
}
int __class_register(struct class *cls, struct lock_class_key *key)
int class_register(const struct class *cls)
{
struct subsys_private *cp;
struct lock_class_key *key;
int error;
pr_debug("device class '%s': registering\n", cls->name);
@ -167,6 +189,8 @@ int __class_register(struct class *cls, struct lock_class_key *key)
klist_init(&cp->klist_devices, klist_class_dev_get, klist_class_dev_put);
INIT_LIST_HEAD(&cp->interfaces);
kset_init(&cp->glue_dirs);
key = &cp->lock_key;
lockdep_register_key(key);
__mutex_init(&cp->mutex, "subsys mutex", key);
error = kobject_set_name(&cp->subsys.kobj, "%s", cls->name);
if (error) {
@ -174,27 +198,15 @@ int __class_register(struct class *cls, struct lock_class_key *key)
return error;
}
/* set the default /sys/dev directory for devices of this class */
if (!cls->dev_kobj)
cls->dev_kobj = sysfs_dev_char_kobj;
#if defined(CONFIG_BLOCK)
/* let the block class directory show up in the root of sysfs */
if (!sysfs_deprecated || cls != &block_class)
cp->subsys.kobj.kset = class_kset;
#else
cp->subsys.kobj.kset = class_kset;
#endif
cp->subsys.kobj.ktype = &class_ktype;
cp->class = cls;
cls->p = cp;
error = kset_register(&cp->subsys);
if (error)
goto err_out;
error = class_add_groups(class_get(cls), cls->class_groups);
class_put(cls);
error = sysfs_create_groups(&cp->subsys.kobj, cls->class_groups);
if (error) {
kobject_del(&cp->subsys.kobj);
kfree_const(cp->subsys.kobj.name);
@ -204,30 +216,34 @@ int __class_register(struct class *cls, struct lock_class_key *key)
err_out:
kfree(cp);
cls->p = NULL;
return error;
}
EXPORT_SYMBOL_GPL(__class_register);
EXPORT_SYMBOL_GPL(class_register);
void class_unregister(struct class *cls)
void class_unregister(const struct class *cls)
{
struct subsys_private *sp = class_to_subsys(cls);
if (!sp)
return;
pr_debug("device class '%s': unregistering\n", cls->name);
class_remove_groups(cls, cls->class_groups);
kset_unregister(&cls->p->subsys);
sysfs_remove_groups(&sp->subsys.kobj, cls->class_groups);
kset_unregister(&sp->subsys);
subsys_put(sp);
}
EXPORT_SYMBOL_GPL(class_unregister);
static void class_create_release(struct class *cls)
static void class_create_release(const struct class *cls)
{
pr_debug("%s called for %s\n", __func__, cls->name);
kfree(cls);
}
/**
* __class_create - create a struct class structure
* @owner: pointer to the module that is to "own" this struct class
* class_create - create a struct class structure
* @name: pointer to a string for the name of this class.
* @key: the lock_class_key for this class; used by mutex lock debugging
*
* This is used to create a struct class pointer that can then be used
* in calls to device_create().
@ -237,8 +253,7 @@ static void class_create_release(struct class *cls)
* Note, the pointer created here is to be destroyed when finished by
* making a call to class_destroy().
*/
struct class *__class_create(struct module *owner, const char *name,
struct lock_class_key *key)
struct class *class_create(const char *name)
{
struct class *cls;
int retval;
@ -250,10 +265,9 @@ struct class *__class_create(struct module *owner, const char *name,
}
cls->name = name;
cls->owner = owner;
cls->class_release = class_create_release;
retval = __class_register(cls, key);
retval = class_register(cls);
if (retval)
goto error;
@ -263,7 +277,7 @@ error:
kfree(cls);
return ERR_PTR(retval);
}
EXPORT_SYMBOL_GPL(__class_create);
EXPORT_SYMBOL_GPL(class_create);
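/*
 * Editor's note, not part of this diff: an illustrative (hypothetical) caller
 * of the new signature; the module owner argument is gone.
 */
static struct class *sample_class;

static int __init sample_init(void)
{
        sample_class = class_create("sample");
        if (IS_ERR(sample_class))
                return PTR_ERR(sample_class);
        return 0;
}

static void __exit sample_exit(void)
{
        class_destroy(sample_class);
}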
/**
* class_destroy - destroys a struct class structure
@ -272,7 +286,7 @@ EXPORT_SYMBOL_GPL(__class_create);
* Note, the pointer to be destroyed must have been created with a call
* to class_create().
*/
void class_destroy(struct class *cls)
void class_destroy(const struct class *cls)
{
if (IS_ERR_OR_NULL(cls))
return;
@ -293,14 +307,18 @@ EXPORT_SYMBOL_GPL(class_destroy);
* otherwise if it is NULL, the iteration starts at the beginning of
* the list.
*/
void class_dev_iter_init(struct class_dev_iter *iter, struct class *class,
struct device *start, const struct device_type *type)
void class_dev_iter_init(struct class_dev_iter *iter, const struct class *class,
const struct device *start, const struct device_type *type)
{
struct subsys_private *sp = class_to_subsys(class);
struct klist_node *start_knode = NULL;
if (!sp)
return;
if (start)
start_knode = &start->p->knode_class;
klist_iter_init_node(&class->p->klist_devices, &iter->ki, start_knode);
klist_iter_init_node(&sp->klist_devices, &iter->ki, start_knode);
iter->type = type;
}
EXPORT_SYMBOL_GPL(class_dev_iter_init);
@ -364,16 +382,17 @@ EXPORT_SYMBOL_GPL(class_dev_iter_exit);
* @fn is allowed to do anything including calling back into class
* code. There's no locking restriction.
*/
int class_for_each_device(struct class *class, struct device *start,
int class_for_each_device(const struct class *class, const struct device *start,
void *data, int (*fn)(struct device *, void *))
{
struct subsys_private *sp = class_to_subsys(class);
struct class_dev_iter iter;
struct device *dev;
int error = 0;
if (!class)
return -EINVAL;
if (!class->p) {
if (!sp) {
WARN(1, "%s called for class '%s' before it was initialized",
__func__, class->name);
return -EINVAL;
@ -386,6 +405,7 @@ int class_for_each_device(struct class *class, struct device *start,
break;
}
class_dev_iter_exit(&iter);
subsys_put(sp);
return error;
}
@ -411,16 +431,17 @@ EXPORT_SYMBOL_GPL(class_for_each_device);
* @match is allowed to do anything including calling back into class
* code. There's no locking restriction.
*/
struct device *class_find_device(struct class *class, struct device *start,
struct device *class_find_device(const struct class *class, const struct device *start,
const void *data,
int (*match)(struct device *, const void *))
{
struct subsys_private *sp = class_to_subsys(class);
struct class_dev_iter iter;
struct device *dev;
if (!class)
return NULL;
if (!class->p) {
if (!sp) {
WARN(1, "%s called for class '%s' before it was initialized",
__func__, class->name);
return NULL;
@ -434,6 +455,7 @@ struct device *class_find_device(struct class *class, struct device *start,
}
}
class_dev_iter_exit(&iter);
subsys_put(sp);
return dev;
}
@ -441,26 +463,33 @@ EXPORT_SYMBOL_GPL(class_find_device);
int class_interface_register(struct class_interface *class_intf)
{
struct class *parent;
struct subsys_private *sp;
const struct class *parent;
struct class_dev_iter iter;
struct device *dev;
if (!class_intf || !class_intf->class)
return -ENODEV;
parent = class_get(class_intf->class);
if (!parent)
parent = class_intf->class;
sp = class_to_subsys(parent);
if (!sp)
return -EINVAL;
mutex_lock(&parent->p->mutex);
list_add_tail(&class_intf->node, &parent->p->interfaces);
/*
* Reference in sp is now incremented and will be dropped when
* the interface is removed in the call to class_interface_unregister()
*/
mutex_lock(&sp->mutex);
list_add_tail(&class_intf->node, &sp->interfaces);
if (class_intf->add_dev) {
class_dev_iter_init(&iter, parent, NULL, NULL);
while ((dev = class_dev_iter_next(&iter)))
class_intf->add_dev(dev, class_intf);
class_intf->add_dev(dev);
class_dev_iter_exit(&iter);
}
mutex_unlock(&parent->p->mutex);
mutex_unlock(&sp->mutex);
return 0;
}
@ -468,29 +497,40 @@ EXPORT_SYMBOL_GPL(class_interface_register);
void class_interface_unregister(struct class_interface *class_intf)
{
struct class *parent = class_intf->class;
struct subsys_private *sp;
const struct class *parent = class_intf->class;
struct class_dev_iter iter;
struct device *dev;
if (!parent)
return;
mutex_lock(&parent->p->mutex);
sp = class_to_subsys(parent);
if (!sp)
return;
mutex_lock(&sp->mutex);
list_del_init(&class_intf->node);
if (class_intf->remove_dev) {
class_dev_iter_init(&iter, parent, NULL, NULL);
while ((dev = class_dev_iter_next(&iter)))
class_intf->remove_dev(dev, class_intf);
class_intf->remove_dev(dev);
class_dev_iter_exit(&iter);
}
mutex_unlock(&parent->p->mutex);
mutex_unlock(&sp->mutex);
class_put(parent);
/*
* Decrement the reference count twice, once for the class_to_subsys()
* call in the start of this function, and the second one from the
* reference increment in class_interface_register()
*/
subsys_put(sp);
subsys_put(sp);
}
EXPORT_SYMBOL_GPL(class_interface_unregister);
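A sketch of a class interface against the new callback signatures (hypothetical "foo" names): as the devlink conversion further down also shows, add_dev() and remove_dev() no longer receive the class_interface pointer:

  #include <linux/device.h>
  #include <linux/device/class.h>

  static struct class foo_class = {
          .name = "foo",
  };

  static int foo_add_dev(struct device *dev)
  {
          dev_info(dev, "foo: device added to class\n");
          return 0;
  }

  static void foo_remove_dev(struct device *dev)
  {
          dev_info(dev, "foo: device removed from class\n");
  }

  static struct class_interface foo_interface = {
          .class      = &foo_class,
          .add_dev    = foo_add_dev,
          .remove_dev = foo_remove_dev,
  };

  /* registered/unregistered as before with class_interface_register() /
   * class_interface_unregister() once foo_class itself is registered. */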
ssize_t show_class_attr_string(struct class *class,
struct class_attribute *attr, char *buf)
ssize_t show_class_attr_string(const struct class *class,
const struct class_attribute *attr, char *buf)
{
struct class_attribute_string *cs;
@ -587,6 +627,31 @@ void class_compat_remove_link(struct class_compat *cls, struct device *dev,
}
EXPORT_SYMBOL_GPL(class_compat_remove_link);
/**
* class_is_registered - determine if at this moment in time, a class is
* registered in the driver core or not.
* @class: the class to check
*
* Returns a boolean to state if the class is registered in the driver core
* or not. Note that the value could switch right after this call is made,
* so only use this in places where you "know" it is safe to do so (usually
* to determine if the specific class has been registered yet or not).
*
* Be careful in using this.
*/
bool class_is_registered(const struct class *class)
{
struct subsys_private *sp = class_to_subsys(class);
bool is_initialized = false;
if (sp) {
is_initialized = true;
subsys_put(sp);
}
return is_initialized;
}
EXPORT_SYMBOL_GPL(class_is_registered);
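A usage sketch, mirroring the pktcdvd conversion later in this series: a statically embedded class can be tested directly instead of keeping a separate "is the pointer non-NULL" flag. The answer is only valid at the moment of the call, as the comment above warns. This assumes the foo_class from the previous sketch:

  #include <linux/err.h>
  #include <linux/kdev_t.h>

  static struct device *foo_create_ctl_dev(void)
  {
          if (!class_is_registered(&foo_class))
                  return ERR_PTR(-ENODEV);

          return device_create(&foo_class, NULL, MKDEV(0, 0), NULL, "foo-ctl");
  }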
int __init classes_init(void)
{
class_kset = kset_create_and_add("class", NULL, NULL);


@ -36,19 +36,6 @@
#include "physical_location.h"
#include "power/power.h"
#ifdef CONFIG_SYSFS_DEPRECATED
#ifdef CONFIG_SYSFS_DEPRECATED_V2
long sysfs_deprecated = 1;
#else
long sysfs_deprecated = 0;
#endif
static int __init sysfs_deprecated_setup(char *arg)
{
return kstrtol(arg, 10, &sysfs_deprecated);
}
early_param("sysfs.deprecated", sysfs_deprecated_setup);
#endif
/* Device links support. */
static LIST_HEAD(deferred_sync);
static unsigned int defer_sync_state_count = 1;
@ -550,13 +537,11 @@ static void devlink_dev_release(struct device *dev)
static struct class devlink_class = {
.name = "devlink",
.owner = THIS_MODULE,
.dev_groups = devlink_groups,
.dev_release = devlink_dev_release,
};
static int devlink_add_symlinks(struct device *dev,
struct class_interface *class_intf)
static int devlink_add_symlinks(struct device *dev)
{
int ret;
size_t len;
@ -605,8 +590,7 @@ out:
return ret;
}
static void devlink_remove_symlinks(struct device *dev,
struct class_interface *class_intf)
static void devlink_remove_symlinks(struct device *dev)
{
struct device_link *link = to_devlink(dev);
size_t len;
@ -1173,10 +1157,7 @@ static void device_links_flush_sync_list(struct list_head *list,
if (dev != dont_lock_dev)
device_lock(dev);
if (dev->bus->sync_state)
dev->bus->sync_state(dev);
else if (dev->driver && dev->driver->sync_state)
dev->driver->sync_state(dev);
dev_sync_state(dev);
if (dev != dont_lock_dev)
device_unlock(dev);
@ -1685,6 +1666,31 @@ static int __init fw_devlink_strict_setup(char *arg)
}
early_param("fw_devlink.strict", fw_devlink_strict_setup);
#define FW_DEVLINK_SYNC_STATE_STRICT 0
#define FW_DEVLINK_SYNC_STATE_TIMEOUT 1
#ifndef CONFIG_FW_DEVLINK_SYNC_STATE_TIMEOUT
static int fw_devlink_sync_state;
#else
static int fw_devlink_sync_state = FW_DEVLINK_SYNC_STATE_TIMEOUT;
#endif
static int __init fw_devlink_sync_state_setup(char *arg)
{
if (!arg)
return -EINVAL;
if (strcmp(arg, "strict") == 0) {
fw_devlink_sync_state = FW_DEVLINK_SYNC_STATE_STRICT;
return 0;
} else if (strcmp(arg, "timeout") == 0) {
fw_devlink_sync_state = FW_DEVLINK_SYNC_STATE_TIMEOUT;
return 0;
}
return -EINVAL;
}
early_param("fw_devlink.sync_state", fw_devlink_sync_state_setup);
static inline u32 fw_devlink_get_flags(u8 fwlink_flags)
{
if (fwlink_flags & FWLINK_FLAG_CYCLE)
@ -1755,6 +1761,44 @@ void fw_devlink_drivers_done(void)
device_links_write_unlock();
}
static int fw_devlink_dev_sync_state(struct device *dev, void *data)
{
struct device_link *link = to_devlink(dev);
struct device *sup = link->supplier;
if (!(link->flags & DL_FLAG_MANAGED) ||
link->status == DL_STATE_ACTIVE || sup->state_synced ||
!dev_has_sync_state(sup))
return 0;
if (fw_devlink_sync_state == FW_DEVLINK_SYNC_STATE_STRICT) {
dev_warn(sup, "sync_state() pending due to %s\n",
dev_name(link->consumer));
return 0;
}
if (!list_empty(&sup->links.defer_sync))
return 0;
dev_warn(sup, "Timed out. Forcing sync_state()\n");
sup->state_synced = true;
get_device(sup);
list_add_tail(&sup->links.defer_sync, data);
return 0;
}
void fw_devlink_probing_done(void)
{
LIST_HEAD(sync_list);
device_links_write_lock();
class_for_each_device(&devlink_class, NULL, &sync_list,
fw_devlink_dev_sync_state);
device_links_write_unlock();
device_links_flush_sync_list(&sync_list, NULL);
}
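For context, a hedged sketch of the supplier side (hypothetical "foo" driver): the sync_state() op that fw_devlink_probing_done() eventually forces via dev_sync_state() is an ordinary driver callback, normally invoked once all consumers have probed:

  #include <linux/platform_device.h>

  static void foo_sync_state(struct device *dev)
  {
          /* drop boot-time defaults kept alive for not-yet-probed consumers */
          dev_dbg(dev, "consumers probed (or timed out), releasing boot-time state\n");
  }

  static struct platform_driver foo_driver = {
          .driver = {
                  .name       = "foo",
                  .sync_state = foo_sync_state,
          },
  };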
/**
* wait_for_init_devices_probe - Try to probe any device needed for init
*
@ -2209,8 +2253,12 @@ static void fw_devlink_link_device(struct device *dev)
int (*platform_notify)(struct device *dev) = NULL;
int (*platform_notify_remove)(struct device *dev) = NULL;
static struct kobject *dev_kobj;
struct kobject *sysfs_dev_char_kobj;
struct kobject *sysfs_dev_block_kobj;
/* /sys/dev/char */
static struct kobject *sysfs_dev_char_kobj;
/* /sys/dev/block */
static struct kobject *sysfs_dev_block_kobj;
static DEFINE_MUTEX(device_hotplug_lock);
@ -2779,7 +2827,7 @@ EXPORT_SYMBOL_GPL(devm_device_add_groups);
static int device_add_attrs(struct device *dev)
{
struct class *class = dev->class;
const struct class *class = dev->class;
const struct device_type *type = dev->type;
int error;
@ -2846,7 +2894,7 @@ static int device_add_attrs(struct device *dev)
static void device_remove_attrs(struct device *dev)
{
struct class *class = dev->class;
const struct class *class = dev->class;
const struct device_type *type = dev->type;
if (dev->physical_location) {
@ -3079,7 +3127,7 @@ struct kobject *virtual_device_parent(struct device *dev)
struct class_dir {
struct kobject kobj;
struct class *class;
const struct class *class;
};
#define to_class_dir(obj) container_of(obj, struct class_dir, kobj)
@ -3103,8 +3151,8 @@ static const struct kobj_type class_dir_ktype = {
.child_ns_type = class_dir_child_ns_type
};
static struct kobject *
class_dir_create_and_add(struct class *class, struct kobject *parent_kobj)
static struct kobject *class_dir_create_and_add(struct subsys_private *sp,
struct kobject *parent_kobj)
{
struct class_dir *dir;
int retval;
@ -3113,12 +3161,12 @@ class_dir_create_and_add(struct class *class, struct kobject *parent_kobj)
if (!dir)
return ERR_PTR(-ENOMEM);
dir->class = class;
dir->class = sp->class;
kobject_init(&dir->kobj, &class_dir_ktype);
dir->kobj.kset = &class->p->glue_dirs;
dir->kobj.kset = &sp->glue_dirs;
retval = kobject_add(&dir->kobj, parent_kobj, "%s", class->name);
retval = kobject_add(&dir->kobj, parent_kobj, "%s", sp->class->name);
if (retval < 0) {
kobject_put(&dir->kobj);
return ERR_PTR(retval);
@ -3131,21 +3179,13 @@ static DEFINE_MUTEX(gdp_mutex);
static struct kobject *get_device_parent(struct device *dev,
struct device *parent)
{
struct subsys_private *sp = class_to_subsys(dev->class);
struct kobject *kobj = NULL;
if (dev->class) {
if (sp) {
struct kobject *parent_kobj;
struct kobject *k;
#ifdef CONFIG_BLOCK
/* block disks show up in /sys/block */
if (sysfs_deprecated && dev->class == &block_class) {
if (parent && parent->class == &block_class)
return &parent->kobj;
return &block_class.p->subsys.kobj;
}
#endif
/*
* If we have no parent, we live in "virtual".
* Class-devices with a non class-device as parent, live
@ -3153,30 +3193,34 @@ static struct kobject *get_device_parent(struct device *dev,
*/
if (parent == NULL)
parent_kobj = virtual_device_parent(dev);
else if (parent->class && !dev->class->ns_type)
else if (parent->class && !dev->class->ns_type) {
subsys_put(sp);
return &parent->kobj;
else
} else {
parent_kobj = &parent->kobj;
}
mutex_lock(&gdp_mutex);
/* find our class-directory at the parent and reference it */
spin_lock(&dev->class->p->glue_dirs.list_lock);
list_for_each_entry(k, &dev->class->p->glue_dirs.list, entry)
spin_lock(&sp->glue_dirs.list_lock);
list_for_each_entry(k, &sp->glue_dirs.list, entry)
if (k->parent == parent_kobj) {
kobj = kobject_get(k);
break;
}
spin_unlock(&dev->class->p->glue_dirs.list_lock);
spin_unlock(&sp->glue_dirs.list_lock);
if (kobj) {
mutex_unlock(&gdp_mutex);
subsys_put(sp);
return kobj;
}
/* or create a new class-directory at the parent device */
k = class_dir_create_and_add(dev->class, parent_kobj);
k = class_dir_create_and_add(sp, parent_kobj);
/* do not emit an uevent for this simple "glue" directory */
mutex_unlock(&gdp_mutex);
subsys_put(sp);
return k;
}
@ -3199,10 +3243,23 @@ static struct kobject *get_device_parent(struct device *dev,
static inline bool live_in_glue_dir(struct kobject *kobj,
struct device *dev)
{
if (!kobj || !dev->class ||
kobj->kset != &dev->class->p->glue_dirs)
struct subsys_private *sp;
bool retval;
if (!kobj || !dev->class)
return false;
return true;
sp = class_to_subsys(dev->class);
if (!sp)
return false;
if (kobj->kset == &sp->glue_dirs)
retval = true;
else
retval = false;
subsys_put(sp);
return retval;
}
static inline struct kobject *get_glue_dir(struct device *dev)
@ -3299,6 +3356,7 @@ static void cleanup_glue_dir(struct device *dev, struct kobject *glue_dir)
static int device_add_class_symlinks(struct device *dev)
{
struct device_node *of_node = dev_of_node(dev);
struct subsys_private *sp;
int error;
if (of_node) {
@ -3308,12 +3366,11 @@ static int device_add_class_symlinks(struct device *dev)
/* An error here doesn't warrant bringing down the device */
}
if (!dev->class)
sp = class_to_subsys(dev->class);
if (!sp)
return 0;
error = sysfs_create_link(&dev->kobj,
&dev->class->p->subsys.kobj,
"subsystem");
error = sysfs_create_link(&dev->kobj, &sp->subsys.kobj, "subsystem");
if (error)
goto out_devnode;
@ -3324,46 +3381,38 @@ static int device_add_class_symlinks(struct device *dev)
goto out_subsys;
}
#ifdef CONFIG_BLOCK
/* /sys/block has directories and does not need symlinks */
if (sysfs_deprecated && dev->class == &block_class)
return 0;
#endif
/* link in the class directory pointing to the device */
error = sysfs_create_link(&dev->class->p->subsys.kobj,
&dev->kobj, dev_name(dev));
error = sysfs_create_link(&sp->subsys.kobj, &dev->kobj, dev_name(dev));
if (error)
goto out_device;
return 0;
goto exit;
out_device:
sysfs_remove_link(&dev->kobj, "device");
out_subsys:
sysfs_remove_link(&dev->kobj, "subsystem");
out_devnode:
sysfs_remove_link(&dev->kobj, "of_node");
exit:
subsys_put(sp);
return error;
}
static void device_remove_class_symlinks(struct device *dev)
{
struct subsys_private *sp = class_to_subsys(dev->class);
if (dev_of_node(dev))
sysfs_remove_link(&dev->kobj, "of_node");
if (!dev->class)
if (!sp)
return;
if (dev->parent && device_is_not_partition(dev))
sysfs_remove_link(&dev->kobj, "device");
sysfs_remove_link(&dev->kobj, "subsystem");
#ifdef CONFIG_BLOCK
if (sysfs_deprecated && dev->class == &block_class)
return;
#endif
sysfs_delete_link(&dev->class->p->subsys.kobj, &dev->kobj, dev_name(dev));
sysfs_delete_link(&sp->subsys.kobj, &dev->kobj, dev_name(dev));
subsys_put(sp);
}
/**
@ -3383,27 +3432,13 @@ int dev_set_name(struct device *dev, const char *fmt, ...)
}
EXPORT_SYMBOL_GPL(dev_set_name);
/**
* device_to_dev_kobj - select a /sys/dev/ directory for the device
* @dev: device
*
* By default we select char/ for new entries. Setting class->dev_obj
* to NULL prevents an entry from being created. class->dev_kobj must
* be set (or cleared) before any devices are registered to the class
* otherwise device_create_sys_dev_entry() and
* device_remove_sys_dev_entry() will disagree about the presence of
* the link.
*/
/* select a /sys/dev/ directory for the device */
static struct kobject *device_to_dev_kobj(struct device *dev)
{
struct kobject *kobj;
if (dev->class)
kobj = dev->class->dev_kobj;
if (is_blockdev(dev))
return sysfs_dev_block_kobj;
else
kobj = sysfs_dev_char_kobj;
return kobj;
return sysfs_dev_char_kobj;
}
static int device_create_sys_dev_entry(struct device *dev)
@ -3472,6 +3507,7 @@ static int device_private_init(struct device *dev)
*/
int device_add(struct device *dev)
{
struct subsys_private *sp;
struct device *parent;
struct kobject *kobj;
struct class_interface *class_intf;
@ -3600,18 +3636,18 @@ int device_add(struct device *dev)
klist_add_tail(&dev->p->knode_parent,
&parent->p->klist_children);
if (dev->class) {
mutex_lock(&dev->class->p->mutex);
sp = class_to_subsys(dev->class);
if (sp) {
mutex_lock(&sp->mutex);
/* tie the class to the device */
klist_add_tail(&dev->p->knode_class,
&dev->class->p->klist_devices);
klist_add_tail(&dev->p->knode_class, &sp->klist_devices);
/* notify any interfaces that the device is here */
list_for_each_entry(class_intf,
&dev->class->p->interfaces, node)
list_for_each_entry(class_intf, &sp->interfaces, node)
if (class_intf->add_dev)
class_intf->add_dev(dev, class_intf);
mutex_unlock(&dev->class->p->mutex);
class_intf->add_dev(dev);
mutex_unlock(&sp->mutex);
subsys_put(sp);
}
done:
put_device(dev);
@ -3731,6 +3767,7 @@ EXPORT_SYMBOL_GPL(kill_device);
*/
void device_del(struct device *dev)
{
struct subsys_private *sp;
struct device *parent = dev->parent;
struct kobject *glue_dir = NULL;
struct class_interface *class_intf;
@ -3757,18 +3794,20 @@ void device_del(struct device *dev)
device_remove_sys_dev_entry(dev);
device_remove_file(dev, &dev_attr_dev);
}
if (dev->class) {
sp = class_to_subsys(dev->class);
if (sp) {
device_remove_class_symlinks(dev);
mutex_lock(&dev->class->p->mutex);
mutex_lock(&sp->mutex);
/* notify any interfaces that the device is now gone */
list_for_each_entry(class_intf,
&dev->class->p->interfaces, node)
list_for_each_entry(class_intf, &sp->interfaces, node)
if (class_intf->remove_dev)
class_intf->remove_dev(dev, class_intf);
class_intf->remove_dev(dev);
/* remove the device from the class list */
klist_del(&dev->p->knode_class);
mutex_unlock(&dev->class->p->mutex);
mutex_unlock(&sp->mutex);
subsys_put(sp);
}
device_remove_file(dev, &dev_attr_uevent);
device_remove_attrs(dev);
@ -4231,7 +4270,7 @@ static void device_create_release(struct device *dev)
}
static __printf(6, 0) struct device *
device_create_groups_vargs(struct class *class, struct device *parent,
device_create_groups_vargs(const struct class *class, struct device *parent,
dev_t devt, void *drvdata,
const struct attribute_group **groups,
const char *fmt, va_list args)
@ -4291,11 +4330,8 @@ error:
* pointer.
*
* Returns &struct device pointer on success, or ERR_PTR() on error.
*
* Note: the struct class passed to this function must have previously
* been created with a call to class_create().
*/
struct device *device_create(struct class *class, struct device *parent,
struct device *device_create(const struct class *class, struct device *parent,
dev_t devt, void *drvdata, const char *fmt, ...)
{
va_list vargs;
@ -4332,11 +4368,8 @@ EXPORT_SYMBOL_GPL(device_create);
* pointer.
*
* Returns &struct device pointer on success, or ERR_PTR() on error.
*
* Note: the struct class passed to this function must have previously
* been created with a call to class_create().
*/
struct device *device_create_with_groups(struct class *class,
struct device *device_create_with_groups(const struct class *class,
struct device *parent, dev_t devt,
void *drvdata,
const struct attribute_group **groups,
@ -4361,7 +4394,7 @@ EXPORT_SYMBOL_GPL(device_create_with_groups);
* This call unregisters and cleans up a device that was created with a
* call to device_create().
*/
void device_destroy(struct class *class, dev_t devt)
void device_destroy(const struct class *class, dev_t devt)
{
struct device *dev;
@ -4383,9 +4416,12 @@ EXPORT_SYMBOL_GPL(device_destroy);
* on the same device to ensure that new_name is valid and
* won't conflict with other devices.
*
* Note: Don't call this function. Currently, the networking layer calls this
* function, but that will change. The following text from Kay Sievers offers
* some insight:
* Note: given that some subsystems (networking and infiniband) use this
* function, with no immediate plans for this to change, we cannot assume or
* require that this function not be called at all.
*
* However, if you're writing new code, do not call this function. The following
* text from Kay Sievers offers some insight:
*
* Renaming devices is racy at many levels, symlinks and other stuff are not
* replaced atomically, and you get a "move" uevent, but it's not easy to
@ -4399,13 +4435,6 @@ EXPORT_SYMBOL_GPL(device_destroy);
* kernel device renaming. Besides that, it's not even implemented now for
* other things than (driver-core wise very simple) network devices.
*
* We are currently about to change network renaming in udev to completely
* disallow renaming of devices in the same namespace as the kernel uses,
* because we can't solve the problems properly, that arise with swapping names
* of multiple interfaces without races. Means, renaming of eth[0-9]* will only
* be allowed to some other name than eth[0-9]*, for the aforementioned
* reasons.
*
* Make up a "real" name in the driver before you register anything, or add
* some other attributes for userspace to find the device, or use udev to add
* symlinks -- but never rename kernel devices later, it's a complete mess. We
@ -4431,9 +4460,16 @@ int device_rename(struct device *dev, const char *new_name)
}
if (dev->class) {
error = sysfs_rename_link_ns(&dev->class->p->subsys.kobj,
kobj, old_device_name,
struct subsys_private *sp = class_to_subsys(dev->class);
if (!sp) {
error = -EINVAL;
goto out;
}
error = sysfs_rename_link_ns(&sp->subsys.kobj, kobj, old_device_name,
new_name, kobject_namespace(kobj));
subsys_put(sp);
if (error)
goto out;
}
@ -4558,7 +4594,7 @@ static int device_attrs_change_owner(struct device *dev, kuid_t kuid,
kgid_t kgid)
{
struct kobject *kobj = &dev->kobj;
struct class *class = dev->class;
const struct class *class = dev->class;
const struct device_type *type = dev->type;
int error;
@ -4616,6 +4652,7 @@ int device_change_owner(struct device *dev, kuid_t kuid, kgid_t kgid)
{
int error;
struct kobject *kobj = &dev->kobj;
struct subsys_private *sp;
dev = get_device(dev);
if (!dev)
@ -4652,21 +4689,19 @@ int device_change_owner(struct device *dev, kuid_t kuid, kgid_t kgid)
if (error)
goto out;
#ifdef CONFIG_BLOCK
if (sysfs_deprecated && dev->class == &block_class)
goto out;
#endif
/*
* Change the owner of the symlink located in the class directory of
* the device class associated with @dev which points to the actual
* directory entry for @dev to @kuid/@kgid. This ensures that the
* symlink shows the same permissions as its target.
*/
error = sysfs_link_change_owner(&dev->class->p->subsys.kobj, &dev->kobj,
dev_name(dev), kuid, kgid);
if (error)
sp = class_to_subsys(dev->class);
if (!sp) {
error = -EINVAL;
goto out;
}
error = sysfs_link_change_owner(&sp->subsys.kobj, &dev->kobj, dev_name(dev), kuid, kgid);
subsys_put(sp);
out:
put_device(dev);
@ -4965,9 +5000,13 @@ void set_primary_fwnode(struct device *dev, struct fwnode_handle *fwnode)
} else {
if (fwnode_is_primary(fn)) {
dev->fwnode = fn->secondary;
/* Skip nullifying fn->secondary if the primary is shared */
if (parent && fn == parent->fwnode)
return;
/* Set fn->secondary = NULL, so fn remains the primary fwnode */
if (!(parent && fn == parent->fwnode))
fn->secondary = NULL;
fn->secondary = NULL;
} else {
dev->fwnode = NULL;
}


@ -315,6 +315,8 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
list_for_each_entry(p, &deferred_probe_pending_list, deferred_probe)
dev_info(p->device, "deferred probe pending\n");
mutex_unlock(&deferred_probe_mutex);
fw_devlink_probing_done();
}
static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
@ -364,6 +366,10 @@ static int deferred_probe_initcall(void)
schedule_delayed_work(&deferred_probe_timeout_work,
driver_deferred_probe_timeout * HZ);
}
if (!IS_ENABLED(CONFIG_MODULES))
fw_devlink_probing_done();
return 0;
}
late_initcall(deferred_probe_initcall);
@ -504,6 +510,27 @@ EXPORT_SYMBOL_GPL(device_bind_driver);
static atomic_t probe_count = ATOMIC_INIT(0);
static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
static ssize_t state_synced_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret = 0;
if (strcmp("1", buf))
return -EINVAL;
device_lock(dev);
if (!dev->state_synced) {
dev->state_synced = true;
dev_sync_state(dev);
} else {
ret = -EINVAL;
}
device_unlock(dev);
return ret ? ret : count;
}
static ssize_t state_synced_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -515,7 +542,7 @@ static ssize_t state_synced_show(struct device *dev,
return sysfs_emit(buf, "%u\n", val);
}
static DEVICE_ATTR_RO(state_synced);
static DEVICE_ATTR_RW(state_synced);
static void device_unbind_cleanup(struct device *dev)
{
@ -708,7 +735,12 @@ static int really_probe_debug(struct device *dev, struct device_driver *drv)
calltime = ktime_get();
ret = really_probe(dev, drv);
rettime = ktime_get();
pr_debug("probe of %s returned %d after %lld usecs\n",
/*
* Don't change this to pr_debug() because that requires
* CONFIG_DYNAMIC_DEBUG and we want a simple 'initcall_debug' on the
* kernel commandline to print this all the time at the debug level.
*/
printk(KERN_DEBUG "probe of %s returned %d after %lld usecs\n",
dev_name(dev), ret, ktime_us_delta(rettime, calltime));
return ret;
}


@ -167,7 +167,7 @@ static int devcd_free(struct device *dev, void *data)
return 0;
}
static ssize_t disabled_show(struct class *class, struct class_attribute *attr,
static ssize_t disabled_show(const struct class *class, const struct class_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%d\n", devcd_disabled);
@ -197,7 +197,7 @@ static ssize_t disabled_show(struct class *class, struct class_attribute *attr,
* so, above situation would not occur.
*/
static ssize_t disabled_store(struct class *class, struct class_attribute *attr,
static ssize_t disabled_store(const struct class *class, const struct class_attribute *attr,
const char *buf, size_t count)
{
long tmp = simple_strtol(buf, NULL, 10);
@ -226,7 +226,6 @@ ATTRIBUTE_GROUPS(devcd_class);
static struct class devcd_class = {
.name = "devcoredump",
.owner = THIS_MODULE,
.dev_release = devcd_dev_release,
.dev_groups = devcd_dev_groups,
.class_groups = devcd_class_groups,


@ -722,20 +722,21 @@ static void devm_action_release(struct device *dev, void *res)
}
/**
* devm_add_action() - add a custom action to list of managed resources
* __devm_add_action() - add a custom action to list of managed resources
* @dev: Device that owns the action
* @action: Function that should be called
* @data: Pointer to data passed to @action implementation
* @name: Name of the resource (for debugging purposes)
*
* This adds a custom action to the list of managed resources so that
* it gets executed as part of standard resource unwinding.
*/
int devm_add_action(struct device *dev, void (*action)(void *), void *data)
int __devm_add_action(struct device *dev, void (*action)(void *), void *data, const char *name)
{
struct action_devres *devres;
devres = devres_alloc(devm_action_release,
sizeof(struct action_devres), GFP_KERNEL);
devres = __devres_alloc_node(devm_action_release, sizeof(struct action_devres),
GFP_KERNEL, NUMA_NO_NODE, name);
if (!devres)
return -ENOMEM;
@ -745,7 +746,7 @@ int devm_add_action(struct device *dev, void (*action)(void *), void *data)
devres_add(dev, devres);
return 0;
}
EXPORT_SYMBOL_GPL(devm_add_action);
EXPORT_SYMBOL_GPL(__devm_add_action);
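A sketch of the presumed caller-facing side (the matching <linux/device.h> change is assumed here, not shown in this hunk): the old devm_add_action() name would survive as a macro passing the stringified action as @name, so driver usage stays unchanged:

  #define devm_add_action(dev, action, data) \
          __devm_add_action(dev, action, data, #action)

  static void foo_disable(void *data)
  {
          /* hypothetical teardown for a hypothetical foo device */
  }

  /* in probe: devm_add_action(dev, foo_disable, foo); */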
/**
* devm_remove_action() - removes previously added custom action


@ -94,15 +94,6 @@ static struct file_system_type dev_fs_type = {
.mount = public_dev_mount,
};
#ifdef CONFIG_BLOCK
static inline int is_blockdev(struct device *dev)
{
return dev->class == &block_class;
}
#else
static inline int is_blockdev(struct device *dev) { return 0; }
#endif
static int devtmpfs_submit_req(struct req *req, const char *tmp)
{
init_completion(&req->done);


@ -3,6 +3,8 @@ menu "Firmware loader"
config FW_LOADER
tristate "Firmware loading facility" if EXPERT
select CRYPTO_HASH if FW_LOADER_DEBUG
select CRYPTO_SHA256 if FW_LOADER_DEBUG
default y
help
This enables the firmware loading facility in the kernel. The kernel
@ -24,6 +26,17 @@ config FW_LOADER
You also want to be sure to enable this built-in if you are going to
enable built-in firmware (CONFIG_EXTRA_FIRMWARE).
config FW_LOADER_DEBUG
bool "Log filenames and checksums for loaded firmware"
depends on CRYPTO = FW_LOADER || CRYPTO=y
depends on DYNAMIC_DEBUG
depends on FW_LOADER
default FW_LOADER
help
Select this option to use dynamic debug to log firmware filenames and
SHA256 checksums to the kernel log for each firmware file that is
loaded.
if FW_LOADER
config FW_LOADER_PAGED_BUF


@ -493,9 +493,9 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
const void *in_buffer))
{
size_t size;
int i, len;
int i, len, maxlen = 0;
int rc = -ENOENT;
char *path;
char *path, *nt = NULL;
size_t msize = INT_MAX;
void *buffer = NULL;
@ -518,8 +518,17 @@ fw_get_filesystem_firmware(struct device *device, struct fw_priv *fw_priv,
if (!fw_path[i][0])
continue;
len = snprintf(path, PATH_MAX, "%s/%s%s",
fw_path[i], fw_priv->fw_name, suffix);
/* strip off \n from customized path */
maxlen = strlen(fw_path[i]);
if (i == 0) {
nt = strchr(fw_path[i], '\n');
if (nt)
maxlen = nt - fw_path[i];
}
len = snprintf(path, PATH_MAX, "%.*s/%s%s",
maxlen, fw_path[i],
fw_priv->fw_name, suffix);
if (len >= PATH_MAX) {
rc = -ENAMETOOLONG;
break;
@ -791,6 +800,50 @@ static void fw_abort_batch_reqs(struct firmware *fw)
mutex_unlock(&fw_lock);
}
#if defined(CONFIG_FW_LOADER_DEBUG)
#include <crypto/hash.h>
#include <crypto/sha2.h>
static void fw_log_firmware_info(const struct firmware *fw, const char *name, struct device *device)
{
struct shash_desc *shash;
struct crypto_shash *alg;
u8 *sha256buf;
char *outbuf;
alg = crypto_alloc_shash("sha256", 0, 0);
if (!alg)
return;
sha256buf = kmalloc(SHA256_DIGEST_SIZE, GFP_KERNEL);
outbuf = kmalloc(SHA256_BLOCK_SIZE + 1, GFP_KERNEL);
shash = kmalloc(sizeof(*shash) + crypto_shash_descsize(alg), GFP_KERNEL);
if (!sha256buf || !outbuf || !shash)
goto out_free;
shash->tfm = alg;
if (crypto_shash_digest(shash, fw->data, fw->size, sha256buf) < 0)
goto out_shash;
for (int i = 0; i < SHA256_DIGEST_SIZE; i++)
sprintf(&outbuf[i * 2], "%02x", sha256buf[i]);
outbuf[SHA256_BLOCK_SIZE] = 0;
dev_dbg(device, "Loaded FW: %s, sha256: %s\n", name, outbuf);
out_shash:
crypto_free_shash(alg);
out_free:
kfree(shash);
kfree(outbuf);
kfree(sha256buf);
}
#else
static void fw_log_firmware_info(const struct firmware *fw, const char *name,
struct device *device)
{}
#endif
/* called from request_firmware() and request_firmware_work_func() */
static int
_request_firmware(const struct firmware **firmware_p, const char *name,
@ -861,11 +914,13 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
revert_creds(old_cred);
put_cred(kern_cred);
out:
out:
if (ret < 0) {
fw_abort_batch_reqs(fw);
release_firmware(fw);
fw = NULL;
} else {
fw_log_firmware_info(fw, name, device);
}
*firmware_p = fw;


@ -25,7 +25,7 @@ void __fw_load_abort(struct fw_priv *fw_priv)
}
#ifdef CONFIG_FW_LOADER_USER_HELPER
static ssize_t timeout_show(struct class *class, struct class_attribute *attr,
static ssize_t timeout_show(const struct class *class, const struct class_attribute *attr,
char *buf)
{
return sysfs_emit(buf, "%d\n", __firmware_loading_timeout());
@ -44,7 +44,7 @@ static ssize_t timeout_show(struct class *class, struct class_attribute *attr,
*
* Note: zero means 'wait forever'.
**/
static ssize_t timeout_store(struct class *class, struct class_attribute *attr,
static ssize_t timeout_store(const struct class *class, const struct class_attribute *attr,
const char *buf, size_t count)
{
int tmp_loading_timeout = simple_strtol(buf, NULL, 10);


@ -8,7 +8,7 @@
#include <linux/device.h>
#ifdef CONFIG_ACPI
extern bool dev_add_physical_location(struct device *dev);
bool dev_add_physical_location(struct device *dev);
extern const struct attribute_group dev_attr_physical_location_group;
#else
static inline bool dev_add_physical_location(struct device *dev) { return false; };


@ -210,7 +210,7 @@ void wakeup_source_sysfs_remove(struct wakeup_source *ws)
static int __init wakeup_sources_sysfs_init(void)
{
wakeup_class = class_create(THIS_MODULE, "wakeup");
wakeup_class = class_create("wakeup");
return PTR_ERR_OR_ZERO(wakeup_class);
}


@ -37,8 +37,10 @@ EXPORT_SYMBOL_GPL(__dev_fwnode_const);
* @propname: Name of the property
*
* Check if property @propname is present in the device firmware description.
*
* Return: true if property @propname is present. Otherwise, returns false.
*/
bool device_property_present(struct device *dev, const char *propname)
bool device_property_present(const struct device *dev, const char *propname)
{
return fwnode_property_present(dev_fwnode(dev), propname);
}
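A minimal sketch of what the const conversion enables (hypothetical "vendor,foo-delay-ms" property): helpers holding only a const struct device * can now query properties directly:

  #include <linux/property.h>

  static u32 foo_get_delay_ms(const struct device *dev)
  {
          u32 delay_ms = 0;

          if (device_property_present(dev, "vendor,foo-delay-ms"))
                  device_property_read_u32(dev, "vendor,foo-delay-ms", &delay_ms);

          return delay_ms;
  }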
@ -48,6 +50,8 @@ EXPORT_SYMBOL_GPL(device_property_present);
* fwnode_property_present - check if a property of a firmware node is present
* @fwnode: Firmware node whose property to check
* @propname: Name of the property
*
* Return: true if property @propname is present. Otherwise, returns false.
*/
bool fwnode_property_present(const struct fwnode_handle *fwnode,
const char *propname)
@ -86,7 +90,7 @@ EXPORT_SYMBOL_GPL(fwnode_property_present);
* %-EOVERFLOW if the size of the property is not as expected.
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_read_u8_array(struct device *dev, const char *propname,
int device_property_read_u8_array(const struct device *dev, const char *propname,
u8 *val, size_t nval)
{
return fwnode_property_read_u8_array(dev_fwnode(dev), propname, val, nval);
@ -114,7 +118,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u8_array);
* %-EOVERFLOW if the size of the property is not as expected.
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_read_u16_array(struct device *dev, const char *propname,
int device_property_read_u16_array(const struct device *dev, const char *propname,
u16 *val, size_t nval)
{
return fwnode_property_read_u16_array(dev_fwnode(dev), propname, val, nval);
@ -142,7 +146,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u16_array);
* %-EOVERFLOW if the size of the property is not as expected.
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_read_u32_array(struct device *dev, const char *propname,
int device_property_read_u32_array(const struct device *dev, const char *propname,
u32 *val, size_t nval)
{
return fwnode_property_read_u32_array(dev_fwnode(dev), propname, val, nval);
@ -170,7 +174,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u32_array);
* %-EOVERFLOW if the size of the property is not as expected.
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_read_u64_array(struct device *dev, const char *propname,
int device_property_read_u64_array(const struct device *dev, const char *propname,
u64 *val, size_t nval)
{
return fwnode_property_read_u64_array(dev_fwnode(dev), propname, val, nval);
@ -198,7 +202,7 @@ EXPORT_SYMBOL_GPL(device_property_read_u64_array);
* %-EOVERFLOW if the size of the property is not as expected.
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_read_string_array(struct device *dev, const char *propname,
int device_property_read_string_array(const struct device *dev, const char *propname,
const char **val, size_t nval)
{
return fwnode_property_read_string_array(dev_fwnode(dev), propname, val, nval);
@ -220,7 +224,7 @@ EXPORT_SYMBOL_GPL(device_property_read_string_array);
* %-EPROTO or %-EILSEQ if the property type is not a string.
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_read_string(struct device *dev, const char *propname,
int device_property_read_string(const struct device *dev, const char *propname,
const char **val)
{
return fwnode_property_read_string(dev_fwnode(dev), propname, val);
@ -242,7 +246,7 @@ EXPORT_SYMBOL_GPL(device_property_read_string);
* %-EPROTO if the property is not an array of strings,
* %-ENXIO if no suitable firmware interface is present.
*/
int device_property_match_string(struct device *dev, const char *propname,
int device_property_match_string(const struct device *dev, const char *propname,
const char *string)
{
return fwnode_property_match_string(dev_fwnode(dev), propname, string);
@ -508,10 +512,10 @@ EXPORT_SYMBOL_GPL(fwnode_property_match_string);
* Obtain a reference based on a named property in an fwnode, with
* integer arguments.
*
* Caller is responsible to call fwnode_handle_put() on the returned
* args->fwnode pointer.
* The caller is responsible for calling fwnode_handle_put() on the returned
* @args->fwnode pointer.
*
* Returns: %0 on success
* Return: %0 on success
* %-ENOENT when the index is out of bounds, the index has an empty
* reference or the property was not found
* %-EINVAL on parse error
@ -547,8 +551,11 @@ EXPORT_SYMBOL_GPL(fwnode_property_get_reference_args);
*
* @index can be used when the named reference holds a table of references.
*
* Returns pointer to the reference fwnode, or ERR_PTR. Caller is responsible to
* call fwnode_handle_put() on the returned fwnode pointer.
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*
* Return: a pointer to the reference fwnode, when found. Otherwise,
* returns an error pointer.
*/
struct fwnode_handle *fwnode_find_reference(const struct fwnode_handle *fwnode,
const char *name,
@ -567,7 +574,7 @@ EXPORT_SYMBOL_GPL(fwnode_find_reference);
* fwnode_get_name - Return the name of a node
* @fwnode: The firmware node
*
* Returns a pointer to the node name.
* Return: a pointer to the node name, or %NULL.
*/
const char *fwnode_get_name(const struct fwnode_handle *fwnode)
{
@ -579,7 +586,7 @@ EXPORT_SYMBOL_GPL(fwnode_get_name);
* fwnode_get_name_prefix - Return the prefix of node for printing purposes
* @fwnode: The firmware node
*
* Returns the prefix of a node, intended to be printed right before the node.
* Return: the prefix of a node, intended to be printed right before the node.
* The prefix works also as a separator between the nodes.
*/
const char *fwnode_get_name_prefix(const struct fwnode_handle *fwnode)
@ -591,7 +598,10 @@ const char *fwnode_get_name_prefix(const struct fwnode_handle *fwnode)
* fwnode_get_parent - Return parent firmware node
* @fwnode: Firmware whose parent is retrieved
*
* Return parent firmware node of the given node if possible or %NULL if no
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*
* Return: parent firmware node of the given node if possible or %NULL if no
* parent was available.
*/
struct fwnode_handle *fwnode_get_parent(const struct fwnode_handle *fwnode)
@ -608,8 +618,12 @@ EXPORT_SYMBOL_GPL(fwnode_get_parent);
* on the passed node, making it suitable for iterating through a
* node's parents.
*
* Returns a node pointer with refcount incremented, use
* fwnode_handle_put() on it when done.
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer. Note that this function also puts a reference to @fwnode
* unconditionally.
*
* Return: parent firmware node of the given node if possible or %NULL if no
* parent was available.
*/
struct fwnode_handle *fwnode_get_next_parent(struct fwnode_handle *fwnode)
{
@ -629,10 +643,12 @@ EXPORT_SYMBOL_GPL(fwnode_get_next_parent);
* firmware node that has a corresponding struct device and returns that struct
* device.
*
* The caller of this function is expected to call put_device() on the returned
* device when they are done.
* The caller is responsible for calling put_device() on the returned device
* pointer.
*
* Return: a pointer to the device of the @fwnode's closest ancestor.
*/
struct device *fwnode_get_next_parent_dev(struct fwnode_handle *fwnode)
struct device *fwnode_get_next_parent_dev(const struct fwnode_handle *fwnode)
{
struct fwnode_handle *parent;
struct device *dev;
@ -651,7 +667,7 @@ struct device *fwnode_get_next_parent_dev(struct fwnode_handle *fwnode)
* fwnode_count_parents - Return the number of parents a node has
* @fwnode: The node the parents of which are to be counted
*
* Returns the number of parents a node has.
* Return: the number of parents a node has.
*/
unsigned int fwnode_count_parents(const struct fwnode_handle *fwnode)
{
@ -670,12 +686,12 @@ EXPORT_SYMBOL_GPL(fwnode_count_parents);
* @fwnode: The node the parent of which is requested
* @depth: Distance of the parent from the node
*
* Returns the nth parent of a node. If there is no parent at the requested
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*
* Return: the nth parent of a node. If there is no parent at the requested
* @depth, %NULL is returned. If @depth is 0, the functionality is equivalent to
* fwnode_handle_get(). For @depth == 1, it is fwnode_get_parent() and so on.
*
* The caller is responsible for calling fwnode_handle_put() for the returned
* node.
*/
struct fwnode_handle *fwnode_get_nth_parent(struct fwnode_handle *fwnode,
unsigned int depth)
@ -700,9 +716,9 @@ EXPORT_SYMBOL_GPL(fwnode_get_nth_parent);
*
* A node is considered an ancestor of itself too.
*
* Returns true if @ancestor is an ancestor of @child. Otherwise, returns false.
* Return: true if @ancestor is an ancestor of @child. Otherwise, returns false.
*/
bool fwnode_is_ancestor_of(struct fwnode_handle *ancestor, struct fwnode_handle *child)
bool fwnode_is_ancestor_of(const struct fwnode_handle *ancestor, const struct fwnode_handle *child)
{
struct fwnode_handle *parent;
@ -725,6 +741,10 @@ bool fwnode_is_ancestor_of(struct fwnode_handle *ancestor, struct fwnode_handle
* fwnode_get_next_child_node - Return the next child node handle for a node
* @fwnode: Firmware node to find the next child node for.
* @child: Handle to one of the node's child nodes or a %NULL handle.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer. Note that this function also puts a reference to @child
* unconditionally.
*/
struct fwnode_handle *
fwnode_get_next_child_node(const struct fwnode_handle *fwnode,
@ -735,10 +755,13 @@ fwnode_get_next_child_node(const struct fwnode_handle *fwnode,
EXPORT_SYMBOL_GPL(fwnode_get_next_child_node);
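A caller-side sketch of the get/put contract documented here (hypothetical "vendor,skip" property): the iterator drops the reference to the previous child itself, so only a child kept across an early break needs an explicit put:

  #include <linux/property.h>

  static void foo_scan_children(const struct fwnode_handle *fwnode)
  {
          struct fwnode_handle *child = NULL;

          while ((child = fwnode_get_next_child_node(fwnode, child))) {
                  if (fwnode_property_present(child, "vendor,skip")) {
                          fwnode_handle_put(child);  /* breaking early keeps a reference */
                          break;
                  }
          }
  }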
/**
* fwnode_get_next_available_child_node - Return the next
* available child node handle for a node
* fwnode_get_next_available_child_node - Return the next available child node handle for a node
* @fwnode: Firmware node to find the next child node for.
* @child: Handle to one of the node's child nodes or a %NULL handle.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer. Note that this function also puts a reference to @child
* unconditionally.
*/
struct fwnode_handle *
fwnode_get_next_available_child_node(const struct fwnode_handle *fwnode,
@ -762,7 +785,11 @@ EXPORT_SYMBOL_GPL(fwnode_get_next_available_child_node);
/**
* device_get_next_child_node - Return the next child node handle for a device
* @dev: Device to find the next child node for.
* @child: Handle to one of the device's child nodes or a null handle.
* @child: Handle to one of the device's child nodes or a %NULL handle.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer. Note that this function also puts a reference to @child
* unconditionally.
*/
struct fwnode_handle *device_get_next_child_node(const struct device *dev,
struct fwnode_handle *child)
@ -787,6 +814,9 @@ EXPORT_SYMBOL_GPL(device_get_next_child_node);
* fwnode_get_named_child_node - Return first matching named child node handle
* @fwnode: Firmware node to find the named child node for.
* @childname: String to match child node name against.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*/
struct fwnode_handle *
fwnode_get_named_child_node(const struct fwnode_handle *fwnode,
@ -800,6 +830,9 @@ EXPORT_SYMBOL_GPL(fwnode_get_named_child_node);
* device_get_named_child_node - Return first matching named child node handle
* @dev: Device to find the named child node for.
* @childname: String to match child node name against.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*/
struct fwnode_handle *device_get_named_child_node(const struct device *dev,
const char *childname)
@ -812,7 +845,10 @@ EXPORT_SYMBOL_GPL(device_get_named_child_node);
* fwnode_handle_get - Obtain a reference to a device node
* @fwnode: Pointer to the device node to obtain the reference to.
*
* Returns the fwnode handle.
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*
* Return: the fwnode handle.
*/
struct fwnode_handle *fwnode_handle_get(struct fwnode_handle *fwnode)
{
@ -841,6 +877,8 @@ EXPORT_SYMBOL_GPL(fwnode_handle_put);
* fwnode_device_is_available - check if a device is available for use
* @fwnode: Pointer to the fwnode of the device.
*
* Return: true if device is available for use. Otherwise, returns false.
*
* For fwnode node types that don't implement the .device_is_available()
* operation, this function returns true.
*/
@ -859,6 +897,8 @@ EXPORT_SYMBOL_GPL(fwnode_device_is_available);
/**
* device_get_child_node_count - return the number of child nodes for device
* @dev: Device to count the child nodes for
*
* Return: the number of child nodes for a given device.
*/
unsigned int device_get_child_node_count(const struct device *dev)
{
@ -895,7 +935,7 @@ EXPORT_SYMBOL_GPL(device_get_dma_attr);
* 'phy-connection-type', and return its index in phy_modes table, or errno in
* error case.
*/
int fwnode_get_phy_mode(struct fwnode_handle *fwnode)
int fwnode_get_phy_mode(const struct fwnode_handle *fwnode)
{
const char *pm;
int err, i;
@ -934,7 +974,7 @@ EXPORT_SYMBOL_GPL(device_get_phy_mode);
* @fwnode: Pointer to the firmware node
* @index: Index of the IO range
*
* Returns a pointer to the mapped memory.
* Return: a pointer to the mapped memory.
*/
void __iomem *fwnode_iomap(struct fwnode_handle *fwnode, int index)
{
@ -947,8 +987,8 @@ EXPORT_SYMBOL(fwnode_iomap);
* @fwnode: Pointer to the firmware node
* @index: Zero-based index of the IRQ
*
* Returns Linux IRQ number on success. Other values are determined
* accordingly to acpi_/of_ irq_get() operation.
* Return: Linux IRQ number on success. Other values are determined
* according to acpi_irq_get() or of_irq_get() operation.
*/
int fwnode_irq_get(const struct fwnode_handle *fwnode, unsigned int index)
{
@ -967,8 +1007,7 @@ EXPORT_SYMBOL(fwnode_irq_get);
* number of the IRQ resource corresponding to the index of the matched
* string.
*
* Return:
* Linux IRQ number on success, or negative errno otherwise.
* Return: Linux IRQ number on success, or negative errno otherwise.
*/
int fwnode_irq_get_byname(const struct fwnode_handle *fwnode, const char *name)
{
@ -990,7 +1029,11 @@ EXPORT_SYMBOL(fwnode_irq_get_byname);
* @fwnode: Pointer to the parent firmware node
* @prev: Previous endpoint node or %NULL to get the first
*
* Returns an endpoint firmware node pointer or %NULL if no more endpoints
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer. Note that this function also puts a reference to @prev
* unconditionally.
*
* Return: an endpoint firmware node pointer or %NULL if no more endpoints
* are available.
*/
struct fwnode_handle *
@ -1030,6 +1073,9 @@ EXPORT_SYMBOL_GPL(fwnode_graph_get_next_endpoint);
* fwnode_graph_get_port_parent - Return the device fwnode of a port endpoint
* @endpoint: Endpoint firmware node of the port
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*
* Return: the firmware node of the device the @endpoint belongs to.
*/
struct fwnode_handle *
@ -1051,6 +1097,9 @@ EXPORT_SYMBOL_GPL(fwnode_graph_get_port_parent);
* @fwnode: Endpoint firmware node pointing to the remote endpoint
*
* Extracts firmware node of a remote device the @fwnode points to.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*/
struct fwnode_handle *
fwnode_graph_get_remote_port_parent(const struct fwnode_handle *fwnode)
@ -1071,6 +1120,9 @@ EXPORT_SYMBOL_GPL(fwnode_graph_get_remote_port_parent);
* @fwnode: Endpoint firmware node pointing to the remote endpoint
*
* Extracts firmware node of a remote port the @fwnode points to.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*/
struct fwnode_handle *
fwnode_graph_get_remote_port(const struct fwnode_handle *fwnode)
@ -1084,6 +1136,9 @@ EXPORT_SYMBOL_GPL(fwnode_graph_get_remote_port);
* @fwnode: Endpoint firmware node pointing to the remote endpoint
*
* Extracts firmware node of a remote endpoint the @fwnode points to.
*
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*/
struct fwnode_handle *
fwnode_graph_get_remote_endpoint(const struct fwnode_handle *fwnode)
@ -1111,8 +1166,11 @@ static bool fwnode_graph_remote_available(struct fwnode_handle *ep)
* @endpoint: identifier of the endpoint node under the port node
* @flags: fwnode lookup flags
*
* Return the fwnode handle of the local endpoint corresponding the port and
* endpoint IDs or NULL if not found.
* The caller is responsible for calling fwnode_handle_put() on the returned
* fwnode pointer.
*
* Return: the fwnode handle of the local endpoint corresponding the port and
* endpoint IDs or %NULL if not found.
*
* If FWNODE_GRAPH_ENDPOINT_NEXT is passed in @flags and the specified endpoint
* has not been found, look for the closest endpoint ID greater than the
@ -1120,9 +1178,6 @@ static bool fwnode_graph_remote_available(struct fwnode_handle *ep)
*
* Does not return endpoints that belong to disabled devices or endpoints that
* are unconnected, unless FWNODE_GRAPH_DEVICE_DISABLED is passed in @flags.
*
* The returned endpoint needs to be released by calling fwnode_handle_put() on
* it when it is not needed any more.
*/
struct fwnode_handle *
fwnode_graph_get_endpoint_by_id(const struct fwnode_handle *fwnode,
@ -1180,7 +1235,7 @@ EXPORT_SYMBOL_GPL(fwnode_graph_get_endpoint_by_id);
* If FWNODE_GRAPH_DEVICE_DISABLED flag is specified, also unconnected endpoints
* and endpoints connected to disabled devices are counted.
*/
unsigned int fwnode_graph_get_endpoint_count(struct fwnode_handle *fwnode,
unsigned int fwnode_graph_get_endpoint_count(const struct fwnode_handle *fwnode,
unsigned long flags)
{
struct fwnode_handle *ep;
@ -1328,7 +1383,8 @@ EXPORT_SYMBOL_GPL(fwnode_connection_find_match);
* @fwnode and other device nodes. @match will be used to convert the
* connection description to data the caller is expecting to be returned
* through the @matches array.
* If @matches is NULL @matches_len is ignored and the total number of resolved
*
* If @matches is %NULL @matches_len is ignored and the total number of resolved
* matches is returned.
*
* Return: Number of matches resolved, or negative errno.


@ -7,6 +7,7 @@
#include <linux/sysfs.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/stat.h>
#include <linux/slab.h>
#include <linux/idr.h>
@ -110,6 +111,18 @@ static void soc_release(struct device *dev)
kfree(soc_dev);
}
static void soc_device_get_machine(struct soc_device_attribute *soc_dev_attr)
{
struct device_node *np;
if (soc_dev_attr->machine)
return;
np = of_find_node_by_path("/");
of_property_read_string(np, "model", &soc_dev_attr->machine);
of_node_put(np);
}
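A sketch of what this enables (hypothetical SoC driver): .machine may now be left unset and soc_device_register() fills it in from the devicetree root "model" property:

  #include <linux/platform_device.h>
  #include <linux/sys_soc.h>

  static int foo_soc_probe(struct platform_device *pdev)
  {
          struct soc_device_attribute *attr;
          struct soc_device *soc_dev;

          attr = devm_kzalloc(&pdev->dev, sizeof(*attr), GFP_KERNEL);
          if (!attr)
                  return -ENOMEM;

          attr->family = "Foo SoC";  /* .machine left NULL on purpose */

          soc_dev = soc_device_register(attr);
          return PTR_ERR_OR_ZERO(soc_dev);
  }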
static struct soc_device_attribute *early_soc_dev_attr;
struct soc_device *soc_device_register(struct soc_device_attribute *soc_dev_attr)
@ -118,6 +131,8 @@ struct soc_device *soc_device_register(struct soc_device_attribute *soc_dev_attr
const struct attribute_group **soc_attr_groups;
int ret;
soc_device_get_machine(soc_dev_attr);
if (!soc_bus_registered) {
if (early_soc_dev_attr)
return ERR_PTR(-EBUSY);


@ -290,7 +290,7 @@ aoechr_init(void)
}
init_completion(&emsgs_comp);
spin_lock_init(&emsgs_lock);
aoe_class = class_create(THIS_MODULE, "aoe");
aoe_class = class_create("aoe");
if (IS_ERR(aoe_class)) {
unregister_chrdev(AOE_MAJOR, "aoechr");
return PTR_ERR(aoe_class);


@ -100,7 +100,8 @@ static struct mutex ctl_mutex; /* Serialize open/close/setup/teardown */
static mempool_t psd_pool;
static struct bio_set pkt_bio_set;
static struct class *class_pktcdvd = NULL; /* /sys/class/pktcdvd */
/* /sys/class/pktcdvd */
static struct class class_pktcdvd;
static struct dentry *pkt_debugfs_root = NULL; /* /sys/kernel/debug/pktcdvd */
/* forward declaration */
@ -315,8 +316,8 @@ static const struct attribute_group *pkt_groups[] = {
static void pkt_sysfs_dev_new(struct pktcdvd_device *pd)
{
if (class_pktcdvd) {
pd->dev = device_create_with_groups(class_pktcdvd, NULL,
if (class_is_registered(&class_pktcdvd)) {
pd->dev = device_create_with_groups(&class_pktcdvd, NULL,
MKDEV(0, 0), pd, pkt_groups,
"%s", pd->name);
if (IS_ERR(pd->dev))
@ -326,7 +327,7 @@ static void pkt_sysfs_dev_new(struct pktcdvd_device *pd)
static void pkt_sysfs_dev_remove(struct pktcdvd_device *pd)
{
if (class_pktcdvd)
if (class_is_registered(&class_pktcdvd))
device_unregister(pd->dev);
}
@ -338,12 +339,7 @@ static void pkt_sysfs_dev_remove(struct pktcdvd_device *pd)
device_map show mappings
*******************************************************************/
static void class_pktcdvd_release(struct class *cls)
{
kfree(cls);
}
static ssize_t device_map_show(struct class *c, struct class_attribute *attr,
static ssize_t device_map_show(const struct class *c, const struct class_attribute *attr,
char *data)
{
int n = 0;
@ -364,7 +360,7 @@ static ssize_t device_map_show(struct class *c, struct class_attribute *attr,
}
static CLASS_ATTR_RO(device_map);
static ssize_t add_store(struct class *c, struct class_attribute *attr,
static ssize_t add_store(const struct class *c, const struct class_attribute *attr,
const char *buf, size_t count)
{
unsigned int major, minor;
@ -385,7 +381,7 @@ static ssize_t add_store(struct class *c, struct class_attribute *attr,
}
static CLASS_ATTR_WO(add);
static ssize_t remove_store(struct class *c, struct class_attribute *attr,
static ssize_t remove_store(const struct class *c, const struct class_attribute *attr,
const char *buf, size_t count)
{
unsigned int major, minor;
@ -405,36 +401,23 @@ static struct attribute *class_pktcdvd_attrs[] = {
};
ATTRIBUTE_GROUPS(class_pktcdvd);
static struct class class_pktcdvd = {
.name = DRIVER_NAME,
.class_groups = class_pktcdvd_groups,
};
static int pkt_sysfs_init(void)
{
int ret = 0;
/*
* create control files in sysfs
* /sys/class/pktcdvd/...
*/
class_pktcdvd = kzalloc(sizeof(*class_pktcdvd), GFP_KERNEL);
if (!class_pktcdvd)
return -ENOMEM;
class_pktcdvd->name = DRIVER_NAME;
class_pktcdvd->owner = THIS_MODULE;
class_pktcdvd->class_release = class_pktcdvd_release;
class_pktcdvd->class_groups = class_pktcdvd_groups;
ret = class_register(class_pktcdvd);
if (ret) {
kfree(class_pktcdvd);
class_pktcdvd = NULL;
pr_err("failed to create class pktcdvd\n");
return ret;
}
return 0;
return class_register(&class_pktcdvd);
}
static void pkt_sysfs_cleanup(void)
{
if (class_pktcdvd)
class_destroy(class_pktcdvd);
class_pktcdvd = NULL;
class_unregister(&class_pktcdvd);
}
/********************************************************************


@ -491,12 +491,12 @@ static bool single_major = true;
module_param(single_major, bool, 0444);
MODULE_PARM_DESC(single_major, "Use a single major number for all rbd devices (default: true)");
static ssize_t add_store(struct bus_type *bus, const char *buf, size_t count);
static ssize_t remove_store(struct bus_type *bus, const char *buf,
static ssize_t add_store(const struct bus_type *bus, const char *buf, size_t count);
static ssize_t remove_store(const struct bus_type *bus, const char *buf,
size_t count);
static ssize_t add_single_major_store(struct bus_type *bus, const char *buf,
static ssize_t add_single_major_store(const struct bus_type *bus, const char *buf,
size_t count);
static ssize_t remove_single_major_store(struct bus_type *bus, const char *buf,
static ssize_t remove_single_major_store(const struct bus_type *bus, const char *buf,
size_t count);
static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth);
@ -538,7 +538,7 @@ static bool rbd_is_lock_owner(struct rbd_device *rbd_dev)
return is_lock_owner;
}
static ssize_t supported_features_show(struct bus_type *bus, char *buf)
static ssize_t supported_features_show(const struct bus_type *bus, char *buf)
{
return sprintf(buf, "0x%llx\n", RBD_FEATURES_SUPPORTED);
}
@ -6967,9 +6967,7 @@ err_out_format:
return ret;
}
static ssize_t do_rbd_add(struct bus_type *bus,
const char *buf,
size_t count)
static ssize_t do_rbd_add(const char *buf, size_t count)
{
struct rbd_device *rbd_dev = NULL;
struct ceph_options *ceph_opts = NULL;
@ -7081,18 +7079,18 @@ err_out_args:
goto out;
}
static ssize_t add_store(struct bus_type *bus, const char *buf, size_t count)
static ssize_t add_store(const struct bus_type *bus, const char *buf, size_t count)
{
if (single_major)
return -EINVAL;
return do_rbd_add(bus, buf, count);
return do_rbd_add(buf, count);
}
static ssize_t add_single_major_store(struct bus_type *bus, const char *buf,
static ssize_t add_single_major_store(const struct bus_type *bus, const char *buf,
size_t count)
{
return do_rbd_add(bus, buf, count);
return do_rbd_add(buf, count);
}
static void rbd_dev_remove_parent(struct rbd_device *rbd_dev)
@ -7122,9 +7120,7 @@ static void rbd_dev_remove_parent(struct rbd_device *rbd_dev)
}
}
static ssize_t do_rbd_remove(struct bus_type *bus,
const char *buf,
size_t count)
static ssize_t do_rbd_remove(const char *buf, size_t count)
{
struct rbd_device *rbd_dev = NULL;
struct list_head *tmp;
@ -7196,18 +7192,18 @@ static ssize_t do_rbd_remove(struct bus_type *bus,
return count;
}
static ssize_t remove_store(struct bus_type *bus, const char *buf, size_t count)
static ssize_t remove_store(const struct bus_type *bus, const char *buf, size_t count)
{
if (single_major)
return -EINVAL;
return do_rbd_remove(bus, buf, count);
return do_rbd_remove(buf, count);
}
static ssize_t remove_single_major_store(struct bus_type *bus, const char *buf,
static ssize_t remove_single_major_store(const struct bus_type *bus, const char *buf,
size_t count)
{
return do_rbd_remove(bus, buf, count);
return do_rbd_remove(buf, count);
}
/*


@ -646,7 +646,7 @@ int rnbd_clt_create_sysfs_files(void)
{
int err;
rnbd_dev_class = class_create(THIS_MODULE, "rnbd-client");
rnbd_dev_class = class_create("rnbd-client");
if (IS_ERR(rnbd_dev_class))
return PTR_ERR(rnbd_dev_class);


@ -215,7 +215,7 @@ int rnbd_srv_create_sysfs_files(void)
{
int err;
rnbd_dev_class = class_create(THIS_MODULE, "rnbd-server");
rnbd_dev_class = class_create("rnbd-server");
if (IS_ERR(rnbd_dev_class))
return PTR_ERR(rnbd_dev_class);


@ -2311,7 +2311,7 @@ static int __init ublk_init(void)
if (ret)
goto unregister_mis;
ublk_chr_class = class_create(THIS_MODULE, "ublk-char");
ublk_chr_class = class_create("ublk-char");
if (IS_ERR(ublk_chr_class)) {
ret = PTR_ERR(ublk_chr_class);
goto free_chrdev_region;


@@ -2424,8 +2424,8 @@ static int zram_remove(struct zram *zram)
* creates a new un-initialized zram device and returns back this device's
* device_id (or an error code if it fails to create a new device).
*/
static ssize_t hot_add_show(struct class *class,
struct class_attribute *attr,
static ssize_t hot_add_show(const struct class *class,
const struct class_attribute *attr,
char *buf)
{
int ret;
@@ -2438,11 +2438,12 @@ static ssize_t hot_add_show(struct class *class,
return ret;
return scnprintf(buf, PAGE_SIZE, "%d\n", ret);
}
/* This attribute must be set to 0400, so CLASS_ATTR_RO() can not be used */
static struct class_attribute class_attr_hot_add =
__ATTR(hot_add, 0400, hot_add_show, NULL);
static ssize_t hot_remove_store(struct class *class,
struct class_attribute *attr,
static ssize_t hot_remove_store(const struct class *class,
const struct class_attribute *attr,
const char *buf,
size_t count)
{
@@ -2481,7 +2482,6 @@ ATTRIBUTE_GROUPS(zram_control_class);
static struct class zram_control_class = {
.name = "zram-control",
.owner = THIS_MODULE,
.class_groups = zram_control_class_groups,
};
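For the class-attribute side of this change, a hedged sketch under the 6.4 prototypes, with all names made up: show() callbacks now receive const class and attribute pointers, and a statically defined struct class no longer carries an .owner member:

#include <linux/device.h>
#include <linux/sysfs.h>

/* hypothetical read-only class attribute using the const prototypes */
static ssize_t version_show(const struct class *class,
                            const struct class_attribute *attr, char *buf)
{
        return sysfs_emit(buf, "1\n");
}
static CLASS_ATTR_RO(version);

static struct attribute *example_class_attrs[] = {
        &class_attr_version.attr,
        NULL,
};
ATTRIBUTE_GROUPS(example_class);

static struct class example_class = {
        .name           = "example",
        /* no .owner: the module pointer was dropped from struct class */
        .class_groups   = example_class_groups,
};

The class itself would still be registered and torn down with class_register()/class_unregister() as before.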


@@ -231,7 +231,7 @@ exit:
return 0;
}
static ssize_t rescan_store(struct bus_type *bus,
static ssize_t rescan_store(const struct bus_type *bus,
const char *buf, size_t count)
{
unsigned long val;
@@ -284,7 +284,7 @@ exit:
return 0;
}
static ssize_t autorescan_store(struct bus_type *bus,
static ssize_t autorescan_store(const struct bus_type *bus,
const char *buf, size_t count)
{
bus_for_each_dev(bus, NULL, (void *)buf, fsl_mc_bus_set_autorescan);
@@ -292,7 +292,7 @@ static ssize_t autorescan_store(struct bus_type *bus,
return count;
}
static ssize_t autorescan_show(struct bus_type *bus, char *buf)
static ssize_t autorescan_show(const struct bus_type *bus, char *buf)
{
bus_for_each_dev(bus, NULL, (void *)buf, fsl_mc_bus_get_autorescan);
return strlen(buf);
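The bus-attribute callbacks follow the same constification. A minimal sketch with invented names, assuming the const iterator prototypes introduced by this series:

#include <linux/device.h>
#include <linux/sysfs.h>

static int example_rescan_one(struct device *dev, void *data)
{
        /* per-device rescan work would go here */
        return 0;
}

/* show/store now receive a const bus pointer */
static ssize_t example_rescan_show(const struct bus_type *bus, char *buf)
{
        return sysfs_emit(buf, "0\n");
}

static ssize_t example_rescan_store(const struct bus_type *bus,
                                    const char *buf, size_t count)
{
        /* bus_for_each_dev() accepts the const pointer as well */
        bus_for_each_dev(bus, NULL, NULL, example_rescan_one);
        return count;
}
static BUS_ATTR_RW(example_rescan);

The attribute would typically be exposed through the bus's bus_groups, as before; only the pointer qualifiers change.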


@@ -293,7 +293,7 @@ static int __init bsr_init(void)
if (!np)
goto out_err;
bsr_class = class_create(THIS_MODULE, "bsr");
bsr_class = class_create("bsr");
if (IS_ERR(bsr_class)) {
printk(KERN_ERR "class_create() failed for bsr_class\n");
ret = PTR_ERR(bsr_class);


@@ -504,7 +504,7 @@ static int __init dsp56k_init_driver(void)
printk("DSP56k driver: Unable to register driver\n");
return -ENODEV;
}
dsp56k_class = class_create(THIS_MODULE, "dsp56k");
dsp56k_class = class_create("dsp56k");
if (IS_ERR(dsp56k_class)) {
err = PTR_ERR(dsp56k_class);
goto out_chrdev;


@@ -860,7 +860,7 @@ static int __init init_ipmi_devintf(void)
pr_info("ipmi device interface\n");
ipmi_class = class_create(THIS_MODULE, "ipmi");
ipmi_class = class_create("ipmi");
if (IS_ERR(ipmi_class)) {
pr_err("ipmi: can't register device class\n");
return PTR_ERR(ipmi_class);


@@ -1049,7 +1049,7 @@ static int __init lp_init(void)
return -EIO;
}
lp_class = class_create(THIS_MODULE, "printer");
lp_class = class_create("printer");
if (IS_ERR(lp_class)) {
err = PTR_ERR(lp_class);
goto out_reg;


@@ -762,7 +762,7 @@ static int __init chr_dev_init(void)
if (register_chrdev(MEM_MAJOR, "mem", &memory_fops))
printk("unable to get major %d for memory devs\n", MEM_MAJOR);
mem_class = class_create(THIS_MODULE, "mem");
mem_class = class_create("mem");
if (IS_ERR(mem_class))
return PTR_ERR(mem_class);


@@ -286,7 +286,7 @@ static int __init misc_init(void)
struct proc_dir_entry *ret;
ret = proc_create_seq("misc", 0, NULL, &misc_seq_ops);
misc_class = class_create(THIS_MODULE, "misc");
misc_class = class_create("misc");
err = PTR_ERR(misc_class);
if (IS_ERR(misc_class))
goto fail_remove;


@@ -1878,7 +1878,7 @@ static int __init cmm_init(void)
{
int rc;
cmm_class = class_create(THIS_MODULE, "cardman_4000");
cmm_class = class_create("cardman_4000");
if (IS_ERR(cmm_class))
return PTR_ERR(cmm_class);


@@ -650,7 +650,7 @@ static int __init cm4040_init(void)
{
int rc;
cmx_class = class_create(THIS_MODULE, "cardman_4040");
cmx_class = class_create("cardman_4040");
if (IS_ERR(cmx_class))
return PTR_ERR(cmx_class);


@@ -325,7 +325,7 @@ static int __init scr24x_init(void)
{
int ret;
scr24x_class = class_create(THIS_MODULE, "scr24x");
scr24x_class = class_create("scr24x");
if (IS_ERR(scr24x_class))
return PTR_ERR(scr24x_class);


@@ -841,7 +841,7 @@ static int __init ppdev_init(void)
pr_warn(CHRDEV ": unable to get major %d\n", PP_MAJOR);
return -EIO;
}
ppdev_class = class_create(THIS_MODULE, CHRDEV);
ppdev_class = class_create(CHRDEV);
if (IS_ERR(ppdev_class)) {
err = PTR_ERR(ppdev_class);
goto out_chrdev;


@@ -282,7 +282,7 @@ static void tpm_dev_release(struct device *dev)
*
* Return: always 0 (i.e. success)
*/
static int tpm_class_shutdown(struct device *dev)
int tpm_class_shutdown(struct device *dev)
{
struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev);
@@ -337,7 +337,6 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
device_initialize(&chip->dev);
chip->dev.class = tpm_class;
chip->dev.class->shutdown_pre = tpm_class_shutdown;
chip->dev.release = tpm_dev_release;
chip->dev.parent = pdev;
chip->dev.groups = chip->groups;


@@ -466,13 +466,15 @@ static int __init tpm_init(void)
{
int rc;
tpm_class = class_create(THIS_MODULE, "tpm");
tpm_class = class_create("tpm");
if (IS_ERR(tpm_class)) {
pr_err("couldn't create tpm class\n");
return PTR_ERR(tpm_class);
}
tpmrm_class = class_create(THIS_MODULE, "tpmrm");
tpm_class->shutdown_pre = tpm_class_shutdown;
tpmrm_class = class_create("tpmrm");
if (IS_ERR(tpmrm_class)) {
pr_err("couldn't create tpmrm class\n");
rc = PTR_ERR(tpmrm_class);
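A minimal sketch of that pattern with an invented "widget" class, assuming shutdown_pre keeps its int (*)(struct device *) form: the hook is installed once on the class right after class_create(), before any device is registered, instead of being written into dev.class->shutdown_pre for every chip:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>

static struct class *widget_class;	/* hypothetical */

static int widget_shutdown(struct device *dev)
{
        /* quiesce the hardware before the system shuts down */
        return 0;
}

static int __init widget_init(void)
{
        widget_class = class_create("widget");
        if (IS_ERR(widget_class))
                return PTR_ERR(widget_class);

        /* class-wide hook, set once before any device is added */
        widget_class->shutdown_pre = widget_shutdown;
        return 0;
}
device_initcall(widget_init);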


@@ -256,6 +256,7 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip);
unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
int tpm_pm_suspend(struct device *dev);
int tpm_pm_resume(struct device *dev);
int tpm_class_shutdown(struct device *dev);
static inline void tpm_msleep(unsigned int delay_msec)
{


@@ -2244,7 +2244,7 @@ static int __init virtio_console_init(void)
{
int err;
pdrvdata.class = class_create(THIS_MODULE, "virtio-ports");
pdrvdata.class = class_create("virtio-ports");
if (IS_ERR(pdrvdata.class)) {
err = PTR_ERR(pdrvdata.class);
pr_err("Error %d creating virtio-ports class\n", err);


@@ -856,7 +856,7 @@ static int __init hwicap_module_init(void)
dev_t devt;
int retval;
icap_class = class_create(THIS_MODULE, "xilinx_config");
icap_class = class_create("xilinx_config");
mutex_init(&icap_sem);
devt = MKDEV(XHWICAP_MAJOR, XHWICAP_MINOR);


@@ -242,7 +242,7 @@ EXPORT_SYMBOL(xillybus_find_inode);
static int __init xillybus_class_init(void)
{
xillybus_class = class_create(THIS_MODULE, "xillybus");
xillybus_class = class_create("xillybus");
if (IS_ERR(xillybus_class)) {
pr_warn("Failed to register xillybus class\n");


@@ -3383,7 +3383,7 @@ static int __init comedi_init(void)
if (retval)
goto out_unregister_chrdev_region;
comedi_class = class_create(THIS_MODULE, "comedi");
comedi_class = class_create("comedi");
if (IS_ERR(comedi_class)) {
retval = PTR_ERR(comedi_class);
pr_err("failed to create class\n");


@@ -795,7 +795,7 @@ static int __init comedi_test_init(void)
}
if (!config_mode) {
ctcls = class_create(THIS_MODULE, CLASS_NAME);
ctcls = class_create(CLASS_NAME);
if (IS_ERR(ctcls)) {
pr_warn("comedi_test: unable to create class\n");
goto clean3;


@@ -63,7 +63,6 @@ static struct cpufreq_driver *current_pstate_driver;
static struct cpufreq_driver amd_pstate_driver;
static struct cpufreq_driver amd_pstate_epp_driver;
static int cppc_state = AMD_PSTATE_DISABLE;
struct kobject *amd_pstate_kobj;
/*
* AMD Energy Preference Performance (EPP)
@@ -1013,6 +1012,7 @@ static struct attribute *pstate_global_attributes[] = {
};
static const struct attribute_group amd_pstate_global_attr_group = {
.name = "amd_pstate",
.attrs = pstate_global_attributes,
};
@@ -1334,6 +1334,7 @@ static struct cpufreq_driver amd_pstate_epp_driver = {
static int __init amd_pstate_init(void)
{
struct device *dev_root;
int ret;
if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
@@ -1380,24 +1381,19 @@ static int __init amd_pstate_init(void)
if (ret)
pr_err("failed to register with return %d\n", ret);
amd_pstate_kobj = kobject_create_and_add("amd_pstate", &cpu_subsys.dev_root->kobj);
if (!amd_pstate_kobj) {
ret = -EINVAL;
pr_err("global sysfs registration failed.\n");
goto kobject_free;
}
ret = sysfs_create_group(amd_pstate_kobj, &amd_pstate_global_attr_group);
if (ret) {
pr_err("sysfs attribute export failed with error %d.\n", ret);
goto global_attr_free;
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
ret = sysfs_create_group(&dev_root->kobj, &amd_pstate_global_attr_group);
put_device(dev_root);
if (ret) {
pr_err("sysfs attribute export failed with error %d.\n", ret);
goto global_attr_free;
}
}
return ret;
global_attr_free:
kobject_put(amd_pstate_kobj);
kobject_free:
cpufreq_unregister_driver(current_pstate_driver);
return ret;
}
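The dev_root handling above is the new pattern for everything that used to dereference cpu_subsys.dev_root directly: bus_get_dev_root() returns the bus's root device with a reference held (or NULL), the caller drops it with put_device(), and the sysfs registration is simply skipped when no root device exists. A reduced sketch, with an invented attribute group:

#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/sysfs.h>

static struct attribute *example_attrs[] = {
        NULL,                   /* real attributes would go here */
};

static const struct attribute_group example_attr_group = {
        .name   = "example",    /* named group: creates the subdirectory */
        .attrs  = example_attrs,
};

static int __init example_sysfs_init(void)
{
        struct device *dev_root;
        int ret = 0;

        dev_root = bus_get_dev_root(&cpu_subsys);
        if (dev_root) {
                ret = sysfs_create_group(&dev_root->kobj,
                                         &example_attr_group);
                put_device(dev_root);
        }
        return ret;
}
device_initcall(example_sysfs_init);

This is also why the separate amd_pstate kobject goes away in the hunk above: with a named attribute group, sysfs_create_group() creates the amd_pstate directory itself.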


@@ -2937,11 +2937,16 @@ EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
static int __init cpufreq_core_init(void)
{
struct cpufreq_governor *gov = cpufreq_default_governor();
struct device *dev_root;
if (cpufreq_disabled())
return -ENODEV;
cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj);
dev_root = bus_get_dev_root(&cpu_subsys);
if (dev_root) {
cpufreq_global_kobject = kobject_create_and_add("cpufreq", &dev_root->kobj);
put_device(dev_root);
}
BUG_ON(!cpufreq_global_kobject);
if (!strlen(default_governor))


@@ -1473,10 +1473,13 @@ static struct kobject *intel_pstate_kobject;
static void __init intel_pstate_sysfs_expose_params(void)
{
struct device *dev_root = bus_get_dev_root(&cpu_subsys);
int rc;
intel_pstate_kobject = kobject_create_and_add("intel_pstate",
&cpu_subsys.dev_root->kobj);
if (dev_root) {
intel_pstate_kobject = kobject_create_and_add("intel_pstate", &dev_root->kobj);
put_device(dev_root);
}
if (WARN_ON(!intel_pstate_kobject))
return;


@@ -808,7 +808,7 @@ static int __init cpuidle_init(void)
if (cpuidle_disabled())
return -ENODEV;
return cpuidle_add_interface(cpu_subsys.dev_root);
return cpuidle_add_interface();
}
module_param(off, int, 0444);


@@ -30,7 +30,7 @@ extern int cpuidle_switch_governor(struct cpuidle_governor *gov);
struct device;
extern int cpuidle_add_interface(struct device *dev);
extern int cpuidle_add_interface(void);
extern void cpuidle_remove_interface(struct device *dev);
extern int cpuidle_add_device_sysfs(struct cpuidle_device *device);
extern void cpuidle_remove_device_sysfs(struct cpuidle_device *device);


@@ -119,11 +119,18 @@ static struct attribute_group cpuidle_attr_group = {
/**
* cpuidle_add_interface - add CPU global sysfs attributes
* @dev: the target device
*/
int cpuidle_add_interface(struct device *dev)
int cpuidle_add_interface(void)
{
return sysfs_create_group(&dev->kobj, &cpuidle_attr_group);
struct device *dev_root = bus_get_dev_root(&cpu_subsys);
int retval;
if (!dev_root)
return -EINVAL;
retval = sysfs_create_group(&dev_root->kobj, &cpuidle_attr_group);
put_device(dev_root);
return retval;
}
/**


@@ -3690,7 +3690,7 @@ static ssize_t qm_get_qos_value(struct hisi_qm *qm, const char *buf,
unsigned long *val,
unsigned int *fun_index)
{
struct bus_type *bus_type = qm->pdev->dev.bus;
const struct bus_type *bus_type = qm->pdev->dev.bus;
char tbuf_bdf[QM_DBG_READ_LEN] = {0};
char val_buf[QM_DBG_READ_LEN] = {0};
struct pci_dev *pdev;


@@ -57,7 +57,7 @@ static int adf_chr_drv_create(void)
return -EFAULT;
}
adf_ctl_drv.drv_class = class_create(THIS_MODULE, DEVICE_NAME);
adf_ctl_drv.drv_class = class_create(DEVICE_NAME);
if (IS_ERR(adf_ctl_drv.drv_class)) {
pr_err("QAT: class_create failed for adf_ctl\n");
goto err_chrdev_unreg;


@@ -1903,7 +1903,7 @@ bool schedule_cxl_memdev_detach(struct cxl_memdev *cxlmd)
EXPORT_SYMBOL_NS_GPL(schedule_cxl_memdev_detach, CXL);
/* for user tooling to ensure port disable work has completed */
static ssize_t flush_store(struct bus_type *bus, const char *buf, size_t count)
static ssize_t flush_store(const struct bus_type *bus, const char *buf, size_t count)
{
if (sysfs_streq(buf, "1")) {
flush_workqueue(cxl_bus_wq);


@@ -74,7 +74,7 @@ int __init dca_sysfs_init(void)
idr_init(&dca_idr);
spin_lock_init(&dca_idr_lock);
dca_class = class_create(THIS_MODULE, "dca");
dca_class = class_create("dca");
if (IS_ERR(dca_class)) {
idr_destroy(&dca_idr);
return PTR_ERR(dca_class);


@@ -469,7 +469,7 @@ ATTRIBUTE_GROUPS(devfreq_event);
static int __init devfreq_event_init(void)
{
devfreq_event_class = class_create(THIS_MODULE, "devfreq-event");
devfreq_event_class = class_create("devfreq-event");
if (IS_ERR(devfreq_event_class)) {
pr_err("%s: couldn't create class\n", __FILE__);
return PTR_ERR(devfreq_event_class);


@@ -1988,7 +1988,7 @@ DEFINE_SHOW_ATTRIBUTE(devfreq_summary);
static int __init devfreq_init(void)
{
devfreq_class = class_create(THIS_MODULE, "devfreq");
devfreq_class = class_create("devfreq");
if (IS_ERR(devfreq_class)) {
pr_err("%s: couldn't create class\n", __FILE__);
return PTR_ERR(devfreq_class);


@@ -314,7 +314,7 @@ static int dma_heap_init(void)
if (ret)
return ret;
dma_heap_class = class_create(THIS_MODULE, DEVNAME);
dma_heap_class = class_create(DEVNAME);
if (IS_ERR(dma_heap_class)) {
unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
return PTR_ERR(dma_heap_class);


@@ -16,7 +16,7 @@ extern void device_driver_detach(struct device *dev);
static ssize_t unbind_store(struct device_driver *drv, const char *buf, size_t count)
{
struct bus_type *bus = drv->bus;
const struct bus_type *bus = drv->bus;
struct device *dev;
int rc = -ENODEV;
@@ -32,7 +32,7 @@ static DRIVER_ATTR_IGNORE_LOCKDEP(unbind, 0200, NULL, unbind_store);
static ssize_t bind_store(struct device_driver *drv, const char *buf, size_t count)
{
struct bus_type *bus = drv->bus;
const struct bus_type *bus = drv->bus;
struct device *dev;
struct device_driver *alt_drv = NULL;
int rc = -ENODEV;


@@ -228,8 +228,9 @@ static struct kobj_type ktype_device_ctrl = {
*/
int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
{
struct device *dev_root;
struct bus_type *edac_subsys;
int err;
int err = -ENODEV;
edac_dbg(1, "\n");
@@ -247,15 +248,16 @@ int edac_device_register_sysfs_main_kobj(struct edac_device_ctl_info *edac_dev)
*/
edac_dev->owner = THIS_MODULE;
if (!try_module_get(edac_dev->owner)) {
err = -ENODEV;
if (!try_module_get(edac_dev->owner))
goto err_out;
}
/* register */
err = kobject_init_and_add(&edac_dev->kobj, &ktype_device_ctrl,
&edac_subsys->dev_root->kobj,
"%s", edac_dev->name);
dev_root = bus_get_dev_root(edac_subsys);
if (dev_root) {
err = kobject_init_and_add(&edac_dev->kobj, &ktype_device_ctrl,
&dev_root->kobj, "%s", edac_dev->name);
put_device(dev_root);
}
if (err) {
edac_dbg(1, "Failed to register '.../edac/%s'\n",
edac_dev->name);
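The EDAC and cpufreq hunks use the kobject flavour of the same lookup: take a reference on the subsystem's root device, parent the new sysfs directory under it, then drop the reference. A hedged sketch with invented names (cpu_subsys stands in here for whichever subsystem is involved):

#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/kobject.h>

static struct kobject *example_kobj;	/* hypothetical */

static int __init example_kobj_init(void)
{
        struct device *dev_root;
        int ret = -ENODEV;

        dev_root = bus_get_dev_root(&cpu_subsys);
        if (dev_root) {
                /* the new directory appears under the bus root device */
                example_kobj = kobject_create_and_add("example",
                                                      &dev_root->kobj);
                put_device(dev_root);
                ret = example_kobj ? 0 : -ENOMEM;
        }
        return ret;
}
device_initcall(example_kobj_init);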

Some files were not shown because too many files have changed in this diff.