Merge branch 'akpm' (incoming from Andrew Morton)

Merge fixes from Andrew Morton:
 "A bunch of fixes and one simple fbdev driver which missed the merge
  window because people will still talking about it (to no great
  effect)."

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (30 commits)
  aio: fix kioctx not being freed after cancellation at exit time
  mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas
  drivers/rtc/rtc-max8998.c: check for pdata presence before dereferencing
  ocfs2: goto out_unlock if ocfs2_get_clusters_nocache() failed in ocfs2_fiemap()
  random: fix accounting race condition with lockless irq entropy_count update
  drivers/char/random.c: fix priming of last_data
  mm/memory_hotplug.c: fix printk format warnings
  nilfs2: fix issue of nilfs_set_page_dirty() for page at EOF boundary
  drivers/block/brd.c: fix brd_lookup_page() race
  fbdev: FB_GOLDFISH should depend on HAS_DMA
  drivers/rtc/rtc-pl031.c: pass correct pointer to free_irq()
  auditfilter.c: fix kernel-doc warnings
  aio: fix io_getevents documentation
  revert "selftest: add simple test for soft-dirty bit"
  drivers/leds/leds-ot200.c: fix error caused by shifted mask
  mm/THP: use pmd_populate() to update the pmd with pgtable_t pointer
  linux/kernel.h: fix kernel-doc warning
  mm compaction: fix of improper cache flush in migration code
  rapidio/tsi721: fix bug in MSI interrupt handling
  hfs: avoid crash in hfs_bnode_create
  ...
Linus Torvalds, 2013-05-24 18:12:15 -07:00
commit 9cf1848278
40 changed files with 1034 additions and 420 deletions


@ -0,0 +1,25 @@
Simple Framebuffer
A simple frame-buffer describes a raw memory region that may be rendered to,
with the assumption that the display hardware has already been set up to scan
out from that buffer.
Required properties:
- compatible: "simple-framebuffer"
- reg: Should contain the location and size of the framebuffer memory.
- width: The width of the framebuffer in pixels.
- height: The height of the framebuffer in pixels.
- stride: The number of bytes in each line of the framebuffer.
- format: The format of the framebuffer surface. Valid values are:
- r5g6b5 (16-bit pixels, d[15:11]=r, d[10:5]=g, d[4:0]=b).
Example:
framebuffer {
compatible = "simple-framebuffer";
reg = <0x1d385000 (1600 * 1200 * 2)>;
width = <1600>;
height = <1200>;
stride = <(1600 * 2)>;
format = "r5g6b5";
};


@ -79,20 +79,63 @@ master port that is used to communicate with devices within the network.
In order to initialize the RapidIO subsystem, a platform must initialize and
register at least one master port within the RapidIO network. To register an
mport within the subsystem, the controller driver initialization code calls
rio_register_mport() for each available master port.
RapidIO subsystem uses subsys_initcall() or device_initcall() to perform
controller initialization (depending on controller device type).
After all active master ports are registered with a RapidIO subsystem,
an enumeration and/or discovery routine may be called automatically or
by user-space command.
4. Enumeration and Discovery
----------------------------
4.1 Overview
------------
RapidIO subsystem configuration options allow users to specify enumeration and
discovery methods as statically linked components or loadable modules.
An enumeration/discovery method implementation and available input parameters
define how any given method can be attached to available RapidIO mports:
simply to all available mports OR individually to the specified mport device.
Depending on the selected enumeration/discovery build configuration, there are
several methods to initiate an enumeration and/or discovery process:
(a) Statically linked enumeration and discovery process can be started
automatically during kernel initialization using corresponding module
parameters. This was the original method used since the introduction of the
RapidIO subsystem. Now this method relies on the enumerator module parameter,
which is 'rio-scan.scan' for the existing basic enumeration/discovery method.
When automatic start of enumeration/discovery is used, a user has to ensure
that all discovering endpoints are started before the enumerating endpoint
and are waiting for enumeration to be completed.
Configuration option CONFIG_RAPIDIO_DISC_TIMEOUT defines the time that a
discovering endpoint waits for enumeration to be completed. If the specified
timeout expires, the discovery process is terminated without obtaining RapidIO
network information. NOTE: a timed-out discovery process may be restarted
later using a user-space command, as described below, if the given endpoint
was enumerated successfully.
(b) Statically linked enumeration and discovery process can be started by
a command from user space. This initiation method provides more flexibility
for a system startup compared to the option (a) above. After all participating
endpoints have been successfully booted, an enumeration process shall be
started first by issuing a user-space command; after enumeration is
completed, a discovery process can be started on all remaining endpoints.
(c) Modular enumeration and discovery process can be started by a command from
user space. After an enumeration/discovery module is loaded, a network scan
process can be started by issuing a user-space command.
Similar to the option (b) above, an enumerator has to be started first.
(d) Modular enumeration and discovery process can be started by a module
initialization routine. In this case an enumerating module shall be loaded
first.
When a network scan process is started, it calls an enumeration or discovery
routine depending on the configured role of a master port: host or agent.
Enumeration is performed by a master port if it is configured as a host port by
assigning a host device ID greater than or equal to zero. A host device ID is
@ -104,8 +147,58 @@ for it.
The enumeration and discovery routines use RapidIO maintenance transactions
to access the configuration space of devices.
4.2 Automatic Start of Enumeration and Discovery
------------------------------------------------
The automatic enumeration/discovery start method is applicable only to the
built-in enumeration/discovery RapidIO configuration. To enable automatic
start of enumeration/discovery with the existing basic enumerator method, use
the boot command line parameter "rio-scan.scan=1".
This configuration requires synchronized start of all RapidIO endpoints that
form a network which will be enumerated/discovered. Discovering endpoints have
to be started before an enumeration starts to ensure that all RapidIO
controllers have been initialized and are ready to be discovered. Configuration
parameter CONFIG_RAPIDIO_DISC_TIMEOUT defines the time (in seconds) for which
a discovering endpoint will wait for enumeration to be completed.
When automatic enumeration/discovery start is selected, the basic method's
initialization routine calls rio_init_mports() to perform enumeration or
discovery for all known mport devices.
Depending on RapidIO network size and configuration, this automatic
enumeration/discovery start method may be difficult to use due to the
requirement for synchronized start of all endpoints.
4.3 User-space Start of Enumeration and Discovery
-------------------------------------------------
User-space start of enumeration and discovery can be used with built-in and
modular build configurations. For user-space controlled start, the RapidIO
subsystem creates the sysfs write-only attribute file '/sys/bus/rapidio/scan'.
To initiate an enumeration or discovery process on a specific mport device, a
user needs to write the mport_ID (not the RapidIO destination ID) into that
file. The mport_ID is a sequential number (0 ... RIO_MAX_MPORTS) assigned
during mport device registration. For example, on a machine with a single
RapidIO controller, the mport_ID for that controller will always be 0.
To initiate RapidIO enumeration/discovery on all available mports, a user may
write '-1' (or RIO_MPORT_ANY) into the scan attribute file.
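For illustration only (not part of this patch set), a minimal user-space
sketch that triggers a scan on mport 0 through this attribute:

/* Hypothetical user-space helper: write an mport_ID to the RapidIO
 * 'scan' attribute described above. Requires write access to sysfs.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *attr = "/sys/bus/rapidio/scan";
	const char *mport_id = "0";	/* "-1" would scan all mports */
	int fd = open(attr, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, mport_id, strlen(mport_id)) < 0) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}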
4.4 Basic Enumeration Method
----------------------------
This is the original enumeration/discovery method, available since the
first release of the RapidIO subsystem code. The enumeration process is
implemented according to the enumeration algorithm outlined in the RapidIO
Interconnect Specification: Annex I [1].
This method can be configured as statically linked or built as a loadable
module. The method's single parameter "scan" allows triggering the
enumeration/discovery process from the module initialization routine.
This enumeration/discovery method can be started only once and does not support
unloading if it is built as a module.
The enumeration process traverses the network using a recursive depth-first
algorithm. When a new device is found, the enumerator takes ownership of that
@ -160,6 +253,19 @@ time period. If this wait time period expires before enumeration is completed,
an agent skips RapidIO discovery and continues with remaining kernel
initialization.
4.5 Adding New Enumeration/Discovery Method
-------------------------------------------
RapidIO subsystem code organization allows addition of new enumeration/discovery
methods as new configuration options without significant impact on the core
RapidIO code.
A new enumeration/discovery method has to be attached to one or more mport
devices before an enumeration/discovery process can be started. Normally, the
method's module initialization routine calls rio_register_scan() to attach
an enumerator to a specified mport device (or devices). The basic enumerator
implementation demonstrates this process.
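As a sketch of that process (all my_* names are hypothetical; only
rio_register_scan(), struct rio_scan and RIO_MPORT_ANY come from this patch
set):

/* Hypothetical skeleton of a new enumeration/discovery module. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/rio.h>

#include "rio.h"	/* drivers/rapidio internal header: rio_register_scan() */

static int my_enumerate(struct rio_mport *mport, u32 flags)
{
	/* walk the fabric, assign destIDs, register discovered devices */
	return 0;
}

static int my_discover(struct rio_mport *mport, u32 flags)
{
	/* wait for the enumerator, then read back the network info */
	return 0;
}

static struct rio_scan my_scan_ops = {
	.enumerate = my_enumerate,
	.discover = my_discover,
};

static int __init my_scan_init(void)
{
	/* attach to every mport that does not have a scan method yet */
	return rio_register_scan(RIO_MPORT_ANY, &my_scan_ops);
}
module_init(my_scan_init);

MODULE_LICENSE("GPL");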
5. References
-------------


@ -88,3 +88,20 @@ that exports additional attributes.
IDT_GEN2:
errlog - reads contents of device error log until it is empty.
5. RapidIO Bus Attributes
-------------------------
RapidIO bus subdirectory /sys/bus/rapidio implements the following bus-specific
attribute:
scan - allows triggering the enumeration/discovery process from user space.
This is a write-only attribute. To initiate an enumeration or discovery
process on a specific mport device, a user needs to write the mport_ID (not
the RapidIO destination ID) into this file. The mport_ID is a sequential
number (0 ... RIO_MAX_MPORTS) assigned to the mport device.
For example, for a machine with a single RapidIO controller, the mport_ID
for that controller will always be 0.
To initiate RapidIO enumeration/discovery on all available mports
a user must write '-1' (or RIO_MPORT_ANY) into this attribute file.


@ -117,13 +117,13 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
spin_lock(&brd->brd_lock);
idx = sector >> PAGE_SECTORS_SHIFT;
page->index = idx;
if (radix_tree_insert(&brd->brd_pages, idx, page)) {
__free_page(page);
page = radix_tree_lookup(&brd->brd_pages, idx);
BUG_ON(!page);
BUG_ON(page->index != idx);
}
spin_unlock(&brd->brd_lock);
radix_tree_preload_end();


@ -1160,8 +1160,7 @@ static int ace_probe(struct platform_device *dev)
dev_dbg(&dev->dev, "ace_probe(%p)\n", dev);
/* device id and bus width */
if (of_property_read_u32(dev->dev.of_node, "port-number", &id))
id = 0;
if (of_find_property(dev->dev.of_node, "8-bit", NULL))
bus_width = ACE_BUS_WIDTH_8;


@ -865,16 +865,24 @@ static size_t account(struct entropy_store *r, size_t nbytes, int min,
if (r->entropy_count / 8 < min + reserved) {
nbytes = 0;
} else {
int entropy_count, orig;
retry:
entropy_count = orig = ACCESS_ONCE(r->entropy_count);
/* If limited, never pull more than available */
if (r->limit && nbytes + reserved >= entropy_count / 8)
nbytes = entropy_count/8 - reserved;
if (entropy_count / 8 >= nbytes + reserved) {
entropy_count -= nbytes*8;
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;
} else {
entropy_count = reserved;
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;
}
if (entropy_count < random_write_wakeup_thresh)
wakeup_write = 1;
}
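The idiom above — snapshot the counter with ACCESS_ONCE(), compute the new
value, publish it with cmpxchg(), and retry if another CPU changed it in the
meantime — can be sketched in plain C11 atomics (illustrative only, not
kernel code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int entropy_count;	/* stands in for r->entropy_count */

/* Atomically debit up to 'nbits', never going below zero. */
static int debit_entropy(int nbits)
{
	int orig, new;

	do {
		orig = atomic_load(&entropy_count);	/* snapshot */
		new = orig >= nbits ? orig - nbits : 0;
	} while (!atomic_compare_exchange_weak(&entropy_count, &orig, new));

	return orig - new;	/* bits actually debited */
}

int main(void)
{
	atomic_store(&entropy_count, 64);
	printf("%d\n", debit_entropy(40));	/* 40 */
	printf("%d\n", debit_entropy(40));	/* 24: floor at zero */
	return 0;
}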
@ -957,10 +965,23 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
{
ssize_t ret = 0, i;
__u8 tmp[EXTRACT_SIZE];
unsigned long flags;
if (fips_enabled) {
spin_lock_irqsave(&r->lock, flags);
if (!r->last_data_init) {
r->last_data_init = true;
spin_unlock_irqrestore(&r->lock, flags);
trace_extract_entropy(r->name, EXTRACT_SIZE,
r->entropy_count, _RET_IP_);
xfer_secondary_pool(r, EXTRACT_SIZE);
extract_buf(r, tmp);
spin_lock_irqsave(&r->lock, flags);
memcpy(r->last_data, tmp, EXTRACT_SIZE);
}
spin_unlock_irqrestore(&r->lock, flags);
}
trace_extract_entropy(r->name, nbytes, r->entropy_count, _RET_IP_);
xfer_secondary_pool(r, nbytes);
@ -970,19 +991,6 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
extract_buf(r, tmp);
if (fips_enabled) {
unsigned long flags;
spin_lock_irqsave(&r->lock, flags);
if (!memcmp(tmp, r->last_data, EXTRACT_SIZE))
panic("Hardware RNG duplicated output!\n");


@ -47,37 +47,37 @@ static struct ot200_led leds[] = {
{
.name = "led_1",
.port = 0x49,
.mask = BIT(6),
},
{
.name = "led_2",
.port = 0x49,
.mask = BIT(5),
},
{
.name = "led_3",
.port = 0x49,
.mask = BIT(4),
},
{
.name = "led_4",
.port = 0x49,
.mask = BIT(3),
},
{
.name = "led_5",
.port = 0x49,
.mask = BIT(2),
},
{
.name = "led_6",
.port = 0x49,
.mask = BIT(1),
},
{
.name = "led_7",
.port = 0x49,
.mask = BIT(0),
}
};


@ -47,4 +47,24 @@ config RAPIDIO_DEBUG
If you are unsure about this, say N here.
choice
prompt "Enumeration method"
depends on RAPIDIO
default RAPIDIO_ENUM_BASIC
help
There are different enumeration and discovery mechanisms offered
for the RapidIO subsystem. You may select a single built-in method
or any number of methods to be built as modules.
Selecting a built-in method disables use of loadable methods.
If unsure, select Basic built-in.
config RAPIDIO_ENUM_BASIC
tristate "Basic"
help
This option includes the basic RapidIO fabric enumeration and discovery
mechanism similar to the one described in RapidIO specification Annex 1.
endchoice
source "drivers/rapidio/switches/Kconfig"


@ -1,7 +1,8 @@
#
# Makefile for RapidIO interconnect services
#
obj-y += rio.o rio-access.o rio-driver.o rio-sysfs.o
obj-$(CONFIG_RAPIDIO_ENUM_BASIC) += rio-scan.o
obj-$(CONFIG_RAPIDIO) += switches/
obj-$(CONFIG_RAPIDIO) += devices/


@ -471,6 +471,10 @@ static irqreturn_t tsi721_irqhandler(int irq, void *ptr)
u32 intval;
u32 ch_inte;
/* For MSI mode disable all device-level interrupts */
if (priv->flags & TSI721_USING_MSI)
iowrite32(0, priv->regs + TSI721_DEV_INTE);
dev_int = ioread32(priv->regs + TSI721_DEV_INT);
if (!dev_int)
return IRQ_NONE;
@ -560,6 +564,14 @@ static irqreturn_t tsi721_irqhandler(int irq, void *ptr)
}
}
#endif
/* For MSI mode re-enable device-level interrupts */
if (priv->flags & TSI721_USING_MSI) {
dev_int = TSI721_DEV_INT_SR2PC_CH | TSI721_DEV_INT_SRIO |
TSI721_DEV_INT_SMSG_CH | TSI721_DEV_INT_BDMA_CH;
iowrite32(dev_int, priv->regs + TSI721_DEV_INTE);
}
return IRQ_HANDLED;
}


@ -164,6 +164,13 @@ void rio_unregister_driver(struct rio_driver *rdrv)
driver_unregister(&rdrv->driver);
}
void rio_attach_device(struct rio_dev *rdev)
{
rdev->dev.bus = &rio_bus_type;
rdev->dev.parent = &rio_bus;
}
EXPORT_SYMBOL_GPL(rio_attach_device);
/**
* rio_match_bus - Tell if a RIO device structure has a matching RIO driver device id structure
* @dev: the standard device structure to match against
@ -200,6 +207,7 @@ struct bus_type rio_bus_type = {
.name = "rapidio",
.match = rio_match_bus,
.dev_attrs = rio_dev_attrs,
.bus_attrs = rio_bus_attrs,
.probe = rio_device_probe,
.remove = rio_device_remove,
};


@ -37,12 +37,8 @@
#include "rio.h"
LIST_HEAD(rio_devices);
static void rio_init_em(struct rio_dev *rdev);
DEFINE_SPINLOCK(rio_global_list_lock);
static int next_destid = 0;
static int next_comptag = 1;
@ -326,127 +322,6 @@ static int rio_is_switch(struct rio_dev *rdev)
return 0;
}
/**
* rio_setup_device- Allocates and sets up a RIO device
* @net: RIO network
@ -587,8 +462,7 @@ static struct rio_dev *rio_setup_device(struct rio_net *net,
rdev->destid);
}
rio_attach_device(rdev);
device_initialize(&rdev->dev);
rdev->dev.release = rio_release_dev;
@ -1260,19 +1134,30 @@ static void rio_pw_enable(struct rio_mport *port, int enable)
/**
* rio_enum_mport- Start enumeration through a master port
* @mport: Master port to send transactions
* @flags: Enumeration control flags
*
* Starts the enumeration process. If somebody has enumerated our
* master port device, then give up. If not and we have an active
* link, then start recursive peer enumeration. Returns %0 if
* enumeration succeeds or %-EBUSY if enumeration fails.
*/
int rio_enum_mport(struct rio_mport *mport, u32 flags)
{
struct rio_net *net = NULL;
int rc = 0;
printk(KERN_INFO "RIO: enumerate master port %d, %s\n", mport->id,
mport->name);
/*
* To avoid multiple start requests (repeat enumeration is not supported
* by this method) check if enumeration/discovery was performed for this
* mport: if the mport was already added into the list of mports for a net,
* exit with an error.
*/
if (mport->nnode.next || mport->nnode.prev)
return -EBUSY;
/* If somebody else enumerated our master port device, bail. */
if (rio_enum_host(mport) < 0) {
printk(KERN_INFO
@ -1362,14 +1247,16 @@ static void rio_build_route_tables(struct rio_net *net)
/**
* rio_disc_mport- Start discovery through a master port
* @mport: Master port to send transactions
* @flags: discovery control flags
*
* Starts the discovery process. If we have an active link,
* then wait for the signal that enumeration is complete (if wait
* is allowed).
* When enumeration completion is signaled, start recursive
* peer discovery. Returns %0 if discovery succeeds or %-EBUSY
* on failure.
*/
int rio_disc_mport(struct rio_mport *mport, u32 flags)
{
struct rio_net *net = NULL;
unsigned long to_end;
@ -1379,6 +1266,11 @@ int rio_disc_mport(struct rio_mport *mport)
/* If master port has an active link, allocate net and discover peers */
if (rio_mport_is_active(mport)) {
if (rio_enum_complete(mport))
goto enum_done;
else if (flags & RIO_SCAN_ENUM_NO_WAIT)
return -EAGAIN;
pr_debug("RIO: wait for enumeration to complete...\n");
to_end = jiffies + CONFIG_RAPIDIO_DISC_TIMEOUT * HZ;
@ -1421,3 +1313,41 @@ enum_done:
bail:
return -EBUSY;
}
static struct rio_scan rio_scan_ops = {
.enumerate = rio_enum_mport,
.discover = rio_disc_mport,
};
static bool scan;
module_param(scan, bool, 0);
MODULE_PARM_DESC(scan, "Start RapidIO network enumeration/discovery "
"(default = 0)");
/**
* rio_basic_attach:
*
* When this enumeration/discovery method is loaded as a module this function
* registers its specific enumeration and discovery routines for all available
* RapidIO mport devices. The "scan" command line parameter controls the
* ability of the module to start RapidIO enumeration/discovery automatically.
*
* Returns 0 for success or -EIO if unable to register itself.
*
* This enumeration/discovery method cannot be unloaded and therefore does not
* provide a matching cleanup_module routine.
*/
static int __init rio_basic_attach(void)
{
if (rio_register_scan(RIO_MPORT_ANY, &rio_scan_ops))
return -EIO;
if (scan)
rio_init_mports();
return 0;
}
late_initcall(rio_basic_attach);
MODULE_DESCRIPTION("Basic RapidIO enumeration/discovery");
MODULE_LICENSE("GPL");


@ -285,3 +285,48 @@ void rio_remove_sysfs_dev_files(struct rio_dev *rdev)
rdev->rswitch->sw_sysfs(rdev, RIO_SW_SYSFS_REMOVE);
}
}
static ssize_t bus_scan_store(struct bus_type *bus, const char *buf,
size_t count)
{
long val;
struct rio_mport *port = NULL;
int rc;
if (kstrtol(buf, 0, &val) < 0)
return -EINVAL;
if (val == RIO_MPORT_ANY) {
rc = rio_init_mports();
goto exit;
}
if (val < 0 || val >= RIO_MAX_MPORTS)
return -EINVAL;
port = rio_find_mport((int)val);
if (!port) {
pr_debug("RIO: %s: mport_%d not available\n",
__func__, (int)val);
return -EINVAL;
}
if (!port->nscan)
return -EINVAL;
if (port->host_deviceid >= 0)
rc = port->nscan->enumerate(port, 0);
else
rc = port->nscan->discover(port, RIO_SCAN_ENUM_NO_WAIT);
exit:
if (!rc)
rc = count;
return rc;
}
struct bus_attribute rio_bus_attrs[] = {
__ATTR(scan, (S_IWUSR|S_IWGRP), NULL, bus_scan_store),
__ATTR_NULL
};


@ -31,7 +31,11 @@
#include "rio.h"
static LIST_HEAD(rio_devices);
static DEFINE_SPINLOCK(rio_global_list_lock);
static LIST_HEAD(rio_mports);
static DEFINE_MUTEX(rio_mport_list_lock);
static unsigned char next_portid;
static DEFINE_SPINLOCK(rio_mmap_lock);
@ -52,6 +56,32 @@ u16 rio_local_get_device_id(struct rio_mport *port)
return (RIO_GET_DID(port->sys_size, result));
}
/**
* rio_add_device- Adds a RIO device to the device model
* @rdev: RIO device
*
* Adds the RIO device to the global device list and adds the RIO
* device to the RIO device list. Creates the generic sysfs nodes
* for an RIO device.
*/
int rio_add_device(struct rio_dev *rdev)
{
int err;
err = device_add(&rdev->dev);
if (err)
return err;
spin_lock(&rio_global_list_lock);
list_add_tail(&rdev->global_list, &rio_devices);
spin_unlock(&rio_global_list_lock);
rio_create_sysfs_dev_files(rdev);
return 0;
}
EXPORT_SYMBOL_GPL(rio_add_device);
/**
* rio_request_inb_mbox - request inbound mailbox service
* @mport: RIO master port from which to allocate the mailbox resource
@ -489,6 +519,7 @@ rio_mport_get_physefb(struct rio_mport *port, int local,
return ext_ftr_ptr;
}
EXPORT_SYMBOL_GPL(rio_mport_get_physefb);
/**
* rio_get_comptag - Begin or continue searching for a RIO device by component tag
@ -521,6 +552,7 @@ exit:
spin_unlock(&rio_global_list_lock);
return rdev;
}
EXPORT_SYMBOL_GPL(rio_get_comptag);
/**
* rio_set_port_lockout - Sets/clears LOCKOUT bit (RIO EM 1.3) for a switch port.
@ -545,6 +577,107 @@ int rio_set_port_lockout(struct rio_dev *rdev, u32 pnum, int lock)
regval);
return 0;
}
EXPORT_SYMBOL_GPL(rio_set_port_lockout);
/**
* rio_switch_init - Sets switch operations for a particular vendor switch
* @rdev: RIO device
* @do_enum: Enumeration/Discovery mode flag
*
* Searches the RIO switch ops table for known switch types. If the vid
* and did match a switch table entry, then call switch initialization
* routine to setup switch-specific routines.
*/
void rio_switch_init(struct rio_dev *rdev, int do_enum)
{
struct rio_switch_ops *cur = __start_rio_switch_ops;
struct rio_switch_ops *end = __end_rio_switch_ops;
while (cur < end) {
if ((cur->vid == rdev->vid) && (cur->did == rdev->did)) {
pr_debug("RIO: calling init routine for %s\n",
rio_name(rdev));
cur->init_hook(rdev, do_enum);
break;
}
cur++;
}
if ((cur >= end) && (rdev->pef & RIO_PEF_STD_RT)) {
pr_debug("RIO: adding STD routing ops for %s\n",
rio_name(rdev));
rdev->rswitch->add_entry = rio_std_route_add_entry;
rdev->rswitch->get_entry = rio_std_route_get_entry;
rdev->rswitch->clr_table = rio_std_route_clr_table;
}
if (!rdev->rswitch->add_entry || !rdev->rswitch->get_entry)
printk(KERN_ERR "RIO: missing routing ops for %s\n",
rio_name(rdev));
}
EXPORT_SYMBOL_GPL(rio_switch_init);
/**
* rio_enable_rx_tx_port - enable input receiver and output transmitter of
* given port
* @port: Master port associated with the RIO network
* @local: local=1 select local port otherwise a far device is reached
* @destid: Destination ID of the device to check host bit
* @hopcount: Number of hops to reach the target
* @port_num: Port (-number on switch) to enable on a far end device
*
* Returns 0 or 1 from the General Control Command and Status Register
* (EXT_PTR+0x3C)
*/
int rio_enable_rx_tx_port(struct rio_mport *port,
int local, u16 destid,
u8 hopcount, u8 port_num)
{
#ifdef CONFIG_RAPIDIO_ENABLE_RX_TX_PORTS
u32 regval;
u32 ext_ftr_ptr;
/*
* enable rx input tx output port
*/
pr_debug("rio_enable_rx_tx_port(local = %d, destid = %d, hopcount = "
"%d, port_num = %d)\n", local, destid, hopcount, port_num);
ext_ftr_ptr = rio_mport_get_physefb(port, local, destid, hopcount);
if (local) {
rio_local_read_config_32(port, ext_ftr_ptr +
RIO_PORT_N_CTL_CSR(0),
&regval);
} else {
if (rio_mport_read_config_32(port, destid, hopcount,
ext_ftr_ptr + RIO_PORT_N_CTL_CSR(port_num), &regval) < 0)
return -EIO;
}
if (regval & RIO_PORT_N_CTL_P_TYP_SER) {
/* serial */
regval = regval | RIO_PORT_N_CTL_EN_RX_SER
| RIO_PORT_N_CTL_EN_TX_SER;
} else {
/* parallel */
regval = regval | RIO_PORT_N_CTL_EN_RX_PAR
| RIO_PORT_N_CTL_EN_TX_PAR;
}
if (local) {
rio_local_write_config_32(port, ext_ftr_ptr +
RIO_PORT_N_CTL_CSR(0), regval);
} else {
if (rio_mport_write_config_32(port, destid, hopcount,
ext_ftr_ptr + RIO_PORT_N_CTL_CSR(port_num), regval) < 0)
return -EIO;
}
#endif
return 0;
}
EXPORT_SYMBOL_GPL(rio_enable_rx_tx_port);
/**
* rio_chk_dev_route - Validate route to the specified device.
@ -610,6 +743,7 @@ rio_mport_chk_dev_access(struct rio_mport *mport, u16 destid, u8 hopcount)
return 0;
}
EXPORT_SYMBOL_GPL(rio_mport_chk_dev_access);
/**
* rio_chk_dev_access - Validate access to the specified device.
@ -941,6 +1075,7 @@ rio_mport_get_efb(struct rio_mport *port, int local, u16 destid,
return RIO_GET_BLOCK_ID(reg_val);
}
}
EXPORT_SYMBOL_GPL(rio_mport_get_efb);
/**
* rio_mport_get_feature - query for devices' extended features
@ -997,6 +1132,7 @@ rio_mport_get_feature(struct rio_mport * port, int local, u16 destid,
return 0;
}
EXPORT_SYMBOL_GPL(rio_mport_get_feature);
/**
* rio_get_asm - Begin or continue searching for a RIO device by vid/did/asm_vid/asm_did
@ -1246,6 +1382,95 @@ EXPORT_SYMBOL_GPL(rio_dma_prep_slave_sg);
#endif /* CONFIG_RAPIDIO_DMA_ENGINE */
/**
* rio_find_mport - find RIO mport by its ID
* @mport_id: number (ID) of mport device
*
* Given a RIO mport number, the desired mport is located
* in the global list of mports. If the mport is found, a pointer to its
* data structure is returned. If no mport is found, %NULL is returned.
*/
struct rio_mport *rio_find_mport(int mport_id)
{
struct rio_mport *port;
mutex_lock(&rio_mport_list_lock);
list_for_each_entry(port, &rio_mports, node) {
if (port->id == mport_id)
goto found;
}
port = NULL;
found:
mutex_unlock(&rio_mport_list_lock);
return port;
}
/**
* rio_register_scan - enumeration/discovery method registration interface
* @mport_id: mport device ID for which fabric scan routine has to be set
* (RIO_MPORT_ANY = set for all available mports)
* @scan_ops: enumeration/discovery control structure
*
* Assigns enumeration or discovery method to the specified mport device (or all
* available mports if RIO_MPORT_ANY is specified).
* Returns error if the mport already has an enumerator attached to it.
* In case of RIO_MPORT_ANY ignores ports with valid scan routines and returns
* an error if it was unable to find at least one available mport.
*/
int rio_register_scan(int mport_id, struct rio_scan *scan_ops)
{
struct rio_mport *port;
int rc = -EBUSY;
mutex_lock(&rio_mport_list_lock);
list_for_each_entry(port, &rio_mports, node) {
if (port->id == mport_id || mport_id == RIO_MPORT_ANY) {
if (port->nscan && mport_id == RIO_MPORT_ANY)
continue;
else if (port->nscan)
break;
port->nscan = scan_ops;
rc = 0;
if (mport_id != RIO_MPORT_ANY)
break;
}
}
mutex_unlock(&rio_mport_list_lock);
return rc;
}
EXPORT_SYMBOL_GPL(rio_register_scan);
/**
* rio_unregister_scan - removes enumeration/discovery method from mport
* @mport_id: mport device ID for which fabric scan routine has to be
* unregistered (RIO_MPORT_ANY = set for all available mports)
*
* Removes enumeration or discovery method assigned to the specified mport
* device (or all available mports if RIO_MPORT_ANY is specified).
*/
int rio_unregister_scan(int mport_id)
{
struct rio_mport *port;
mutex_lock(&rio_mport_list_lock);
list_for_each_entry(port, &rio_mports, node) {
if (port->id == mport_id || mport_id == RIO_MPORT_ANY) {
if (port->nscan)
port->nscan = NULL;
if (mport_id != RIO_MPORT_ANY)
break;
}
}
mutex_unlock(&rio_mport_list_lock);
return 0;
}
EXPORT_SYMBOL_GPL(rio_unregister_scan);
static void rio_fixup_device(struct rio_dev *dev)
{
}
@ -1274,7 +1499,7 @@ static void disc_work_handler(struct work_struct *_work)
work = container_of(_work, struct rio_disc_work, work);
pr_debug("RIO: discovery work for mport %d %s\n",
work->mport->id, work->mport->name);
rio_disc_mport(work->mport);
work->mport->nscan->discover(work->mport, 0);
}
int rio_init_mports(void)
@ -1290,12 +1515,15 @@ int rio_init_mports(void)
* First, run enumerations and check if we need to perform discovery
* on any of the registered mports.
*/
mutex_lock(&rio_mport_list_lock);
list_for_each_entry(port, &rio_mports, node) {
if (port->host_deviceid >= 0)
rio_enum_mport(port);
else
if (port->host_deviceid >= 0) {
if (port->nscan)
port->nscan->enumerate(port, 0);
} else
n++;
}
mutex_unlock(&rio_mport_list_lock);
if (!n)
goto no_disc;
@ -1322,14 +1550,16 @@ int rio_init_mports(void)
}
n = 0;
mutex_lock(&rio_mport_list_lock);
list_for_each_entry(port, &rio_mports, node) {
if (port->host_deviceid < 0) {
if (port->host_deviceid < 0 && port->nscan) {
work[n].mport = port;
INIT_WORK(&work[n].work, disc_work_handler);
queue_work(rio_wq, &work[n].work);
n++;
}
}
mutex_unlock(&rio_mport_list_lock);
flush_workqueue(rio_wq);
pr_debug("RIO: destroy discovery workqueue\n");
@ -1342,8 +1572,6 @@ no_disc:
return 0;
}
device_initcall_sync(rio_init_mports);
static int hdids[RIO_MAX_MPORTS + 1];
static int rio_get_hdid(int index)
@ -1371,7 +1599,10 @@ int rio_register_mport(struct rio_mport *port)
port->id = next_portid++;
port->host_deviceid = rio_get_hdid(port->id);
port->nscan = NULL;
mutex_lock(&rio_mport_list_lock);
list_add_tail(&port->node, &rio_mports);
mutex_unlock(&rio_mport_list_lock);
return 0;
}
@ -1386,3 +1617,4 @@ EXPORT_SYMBOL_GPL(rio_request_inb_mbox);
EXPORT_SYMBOL_GPL(rio_release_inb_mbox);
EXPORT_SYMBOL_GPL(rio_request_outb_mbox);
EXPORT_SYMBOL_GPL(rio_release_outb_mbox);
EXPORT_SYMBOL_GPL(rio_init_mports);


@ -15,6 +15,7 @@
#include <linux/rio.h>
#define RIO_MAX_CHK_RETRY 3
#define RIO_MPORT_ANY (-1)
/* Functions internal to the RIO core code */
@ -27,8 +28,6 @@ extern u32 rio_mport_get_efb(struct rio_mport *port, int local, u16 destid,
extern int rio_mport_chk_dev_access(struct rio_mport *mport, u16 destid,
u8 hopcount);
extern int rio_create_sysfs_dev_files(struct rio_dev *rdev);
extern int rio_std_route_add_entry(struct rio_mport *mport, u16 destid,
u8 hopcount, u16 table, u16 route_destid,
u8 route_port);
@ -39,10 +38,18 @@ extern int rio_std_route_clr_table(struct rio_mport *mport, u16 destid,
u8 hopcount, u16 table);
extern int rio_set_port_lockout(struct rio_dev *rdev, u32 pnum, int lock);
extern struct rio_dev *rio_get_comptag(u32 comp_tag, struct rio_dev *from);
extern int rio_add_device(struct rio_dev *rdev);
extern void rio_switch_init(struct rio_dev *rdev, int do_enum);
extern int rio_enable_rx_tx_port(struct rio_mport *port, int local, u16 destid,
u8 hopcount, u8 port_num);
extern int rio_register_scan(int mport_id, struct rio_scan *scan_ops);
extern int rio_unregister_scan(int mport_id);
extern void rio_attach_device(struct rio_dev *rdev);
extern struct rio_mport *rio_find_mport(int mport_id);
/* Structures internal to the RIO core code */
extern struct device_attribute rio_dev_attrs[];
extern spinlock_t rio_global_list_lock;
extern struct bus_attribute rio_bus_attrs[];
extern struct rio_switch_ops __start_rio_switch_ops[];
extern struct rio_switch_ops __end_rio_switch_ops[];


@ -285,7 +285,7 @@ static int max8998_rtc_probe(struct platform_device *pdev)
info->irq, ret);
dev_info(&pdev->dev, "RTC CHIP NAME: %s\n", pdev->id_entry->name);
if (pdata->rtc_delay) {
if (pdata && pdata->rtc_delay) {
info->lp3974_bug_workaround = true;
dev_warn(&pdev->dev, "LP3974 with RTC REGERR option."
" RTC updates will be extremely slow.\n");


@ -306,7 +306,7 @@ static int pl031_remove(struct amba_device *adev)
struct pl031_local *ldata = dev_get_drvdata(&adev->dev);
amba_set_drvdata(adev, NULL);
free_irq(adev->irq[0], ldata->rtc);
free_irq(adev->irq[0], ldata);
rtc_device_unregister(ldata->rtc);
iounmap(ldata->base);
kfree(ldata);


@ -2199,7 +2199,7 @@ config FB_XILINX
config FB_GOLDFISH
tristate "Goldfish Framebuffer"
depends on FB && HAS_DMA
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
@ -2453,6 +2453,23 @@ config FB_HYPERV
help
This framebuffer driver supports Microsoft Hyper-V Synthetic Video.
config FB_SIMPLE
bool "Simple framebuffer support"
depends on (FB = y) && OF
select FB_CFB_FILLRECT
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
help
Say Y if you want support for a simple frame-buffer.
This driver assumes that the display hardware has been initialized
before the kernel boots, and the kernel will simply render to the
pre-allocated frame buffer surface.
Configuration re: surface address, size, and format must be provided
through device tree, or potentially plain old platform data in the
future.
source "drivers/video/omap/Kconfig"
source "drivers/video/omap2/Kconfig"
source "drivers/video/exynos/Kconfig"


@ -166,6 +166,7 @@ obj-$(CONFIG_FB_MX3) += mx3fb.o
obj-$(CONFIG_FB_DA8XX) += da8xx-fb.o
obj-$(CONFIG_FB_MXS) += mxsfb.o
obj-$(CONFIG_FB_SSD1307) += ssd1307fb.o
obj-$(CONFIG_FB_SIMPLE) += simplefb.o
# the test framebuffer is last
obj-$(CONFIG_FB_VIRTUAL) += vfb.o

drivers/video/simplefb.c (new file, 234 lines)

@ -0,0 +1,234 @@
/*
* Simplest possible simple frame-buffer driver, as a platform device
*
* Copyright (c) 2013, Stephen Warren
*
* Based on q40fb.c, which was:
* Copyright (C) 2001 Richard Zidlicky <rz@linux-m68k.org>
*
* Also based on offb.c, which was:
* Copyright (C) 1997 Geert Uytterhoeven
* Copyright (C) 1996 Paul Mackerras
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <linux/errno.h>
#include <linux/fb.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/platform_device.h>
static struct fb_fix_screeninfo simplefb_fix = {
.id = "simple",
.type = FB_TYPE_PACKED_PIXELS,
.visual = FB_VISUAL_TRUECOLOR,
.accel = FB_ACCEL_NONE,
};
static struct fb_var_screeninfo simplefb_var = {
.height = -1,
.width = -1,
.activate = FB_ACTIVATE_NOW,
.vmode = FB_VMODE_NONINTERLACED,
};
static int simplefb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
u_int transp, struct fb_info *info)
{
u32 *pal = info->pseudo_palette;
u32 cr = red >> (16 - info->var.red.length);
u32 cg = green >> (16 - info->var.green.length);
u32 cb = blue >> (16 - info->var.blue.length);
u32 value;
if (regno >= 16)
return -EINVAL;
value = (cr << info->var.red.offset) |
(cg << info->var.green.offset) |
(cb << info->var.blue.offset);
if (info->var.transp.length > 0) {
u32 mask = (1 << info->var.transp.length) - 1;
mask <<= info->var.transp.offset;
value |= mask;
}
pal[regno] = value;
return 0;
}
static struct fb_ops simplefb_ops = {
.owner = THIS_MODULE,
.fb_setcolreg = simplefb_setcolreg,
.fb_fillrect = cfb_fillrect,
.fb_copyarea = cfb_copyarea,
.fb_imageblit = cfb_imageblit,
};
struct simplefb_format {
const char *name;
u32 bits_per_pixel;
struct fb_bitfield red;
struct fb_bitfield green;
struct fb_bitfield blue;
struct fb_bitfield transp;
};
static struct simplefb_format simplefb_formats[] = {
{ "r5g6b5", 16, {11, 5}, {5, 6}, {0, 5}, {0, 0} },
};
struct simplefb_params {
u32 width;
u32 height;
u32 stride;
struct simplefb_format *format;
};
static int simplefb_parse_dt(struct platform_device *pdev,
struct simplefb_params *params)
{
struct device_node *np = pdev->dev.of_node;
int ret;
const char *format;
int i;
ret = of_property_read_u32(np, "width", &params->width);
if (ret) {
dev_err(&pdev->dev, "Can't parse width property\n");
return ret;
}
ret = of_property_read_u32(np, "height", &params->height);
if (ret) {
dev_err(&pdev->dev, "Can't parse height property\n");
return ret;
}
ret = of_property_read_u32(np, "stride", &params->stride);
if (ret) {
dev_err(&pdev->dev, "Can't parse stride property\n");
return ret;
}
ret = of_property_read_string(np, "format", &format);
if (ret) {
dev_err(&pdev->dev, "Can't parse format property\n");
return ret;
}
params->format = NULL;
for (i = 0; i < ARRAY_SIZE(simplefb_formats); i++) {
if (strcmp(format, simplefb_formats[i].name))
continue;
params->format = &simplefb_formats[i];
break;
}
if (!params->format) {
dev_err(&pdev->dev, "Invalid format value\n");
return -EINVAL;
}
return 0;
}
static int simplefb_probe(struct platform_device *pdev)
{
int ret;
struct simplefb_params params;
struct fb_info *info;
struct resource *mem;
if (fb_get_options("simplefb", NULL))
return -ENODEV;
ret = simplefb_parse_dt(pdev, &params);
if (ret)
return ret;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!mem) {
dev_err(&pdev->dev, "No memory resource\n");
return -EINVAL;
}
info = framebuffer_alloc(sizeof(u32) * 16, &pdev->dev);
if (!info)
return -ENOMEM;
platform_set_drvdata(pdev, info);
info->fix = simplefb_fix;
info->fix.smem_start = mem->start;
info->fix.smem_len = resource_size(mem);
info->fix.line_length = params.stride;
info->var = simplefb_var;
info->var.xres = params.width;
info->var.yres = params.height;
info->var.xres_virtual = params.width;
info->var.yres_virtual = params.height;
info->var.bits_per_pixel = params.format->bits_per_pixel;
info->var.red = params.format->red;
info->var.green = params.format->green;
info->var.blue = params.format->blue;
info->var.transp = params.format->transp;
info->fbops = &simplefb_ops;
info->flags = FBINFO_DEFAULT;
info->screen_base = devm_ioremap(&pdev->dev, info->fix.smem_start,
info->fix.smem_len);
if (!info->screen_base) {
framebuffer_release(info);
return -ENODEV;
}
info->pseudo_palette = (void *)(info + 1);
ret = register_framebuffer(info);
if (ret < 0) {
dev_err(&pdev->dev, "Unable to register simplefb: %d\n", ret);
framebuffer_release(info);
return ret;
}
dev_info(&pdev->dev, "fb%d: simplefb registered!\n", info->node);
return 0;
}
static int simplefb_remove(struct platform_device *pdev)
{
struct fb_info *info = platform_get_drvdata(pdev);
unregister_framebuffer(info);
framebuffer_release(info);
return 0;
}
static const struct of_device_id simplefb_of_match[] = {
{ .compatible = "simple-framebuffer", },
{ },
};
MODULE_DEVICE_TABLE(of, simplefb_of_match);
static struct platform_driver simplefb_driver = {
.driver = {
.name = "simple-framebuffer",
.owner = THIS_MODULE,
.of_match_table = simplefb_of_match,
},
.probe = simplefb_probe,
.remove = simplefb_remove,
};
module_platform_driver(simplefb_driver);
MODULE_AUTHOR("Stephen Warren <swarren@wwwdotorg.org>");
MODULE_DESCRIPTION("Simple framebuffer driver");
MODULE_LICENSE("GPL v2");


@ -307,7 +307,9 @@ static void free_ioctx(struct kioctx *ctx)
kunmap_atomic(ring);
while (atomic_read(&ctx->reqs_active) > 0) {
wait_event(ctx->wait,
head != ctx->tail ||
atomic_read(&ctx->reqs_active) <= 0);
avail = (head <= ctx->tail ? ctx->tail : ctx->nr_events) - head;
@ -1299,8 +1301,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
* < min_nr if the timeout specified by timeout has elapsed
* before sufficient events are available, where timeout == NULL
* specifies an infinite timeout. Note that the timeout pointed to by
* timeout is relative. Will fail with -ENOSYS if not implemented.
*/
SYSCALL_DEFINE5(io_getevents, aio_context_t, ctx_id,
long, min_nr,


@ -1229,6 +1229,19 @@ static int fat_read_root(struct inode *inode)
return 0;
}
static unsigned long calc_fat_clusters(struct super_block *sb)
{
struct msdos_sb_info *sbi = MSDOS_SB(sb);
/* Divide first to avoid overflow */
if (sbi->fat_bits != 12) {
unsigned long ent_per_sec = sb->s_blocksize * 8 / sbi->fat_bits;
return ent_per_sec * sbi->fat_length;
}
return sbi->fat_length * sb->s_blocksize * 8 / sbi->fat_bits;
}
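Why dividing first matters in calc_fat_clusters(): on a 32-bit unsigned long
the naive product can wrap. A small stand-alone demonstration with
illustrative numbers (not taken from this patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t fat_length = 0x40000;	/* FAT sectors, illustrative */
	uint32_t blocksize = 4096, fat_bits = 16;

	/* multiply first: 0x40000 * 4096 * 8 = 2^33, wraps to 0 in 32 bits */
	uint32_t bad = fat_length * blocksize * 8 / fat_bits;
	/* divide first: entries per sector (2048) stays small, no overflow */
	uint32_t good = blocksize * 8 / fat_bits * fat_length;

	printf("%u vs %u\n", bad, good);	/* 0 vs 536870912 */
	return 0;
}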
/*
* Read the super block of an MS-DOS FS.
*/
@ -1434,7 +1447,7 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat,
sbi->dirty = b->fat16.state & FAT_STATE_DIRTY;
/* check that FAT table does not overflow */
fat_clusters = calc_fat_clusters(sb);
total_clusters = min(total_clusters, fat_clusters - FAT_START_ENT);
if (total_clusters > MAX_FAT(sb)) {
if (!silent)


@ -415,7 +415,11 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
spin_lock(&tree->hash_lock);
node = hfs_bnode_findhash(tree, num);
spin_unlock(&tree->hash_lock);
if (node) {
pr_crit("new node %u already hashed?\n", num);
WARN_ON(1);
return node;
}
node = __hfs_bnode_create(tree, num);
if (!node)
return ERR_PTR(-ENOMEM);


@ -219,13 +219,32 @@ static int nilfs_writepage(struct page *page, struct writeback_control *wbc)
static int nilfs_set_page_dirty(struct page *page)
{
int ret = __set_page_dirty_nobuffers(page);
if (page_has_buffers(page)) {
struct inode *inode = page->mapping->host;
unsigned nr_dirty = 0;
struct buffer_head *bh, *head;
/*
* This page is locked by callers, and no other thread
* concurrently marks its buffers dirty since they are
* only dirtied through routines in fs/buffer.c in
* which call sites of mark_buffer_dirty are protected
* by page lock.
*/
bh = head = page_buffers(page);
do {
/* Do not mark hole blocks dirty */
if (buffer_dirty(bh) || !buffer_mapped(bh))
continue;
set_buffer_dirty(bh);
nr_dirty++;
} while (bh = bh->b_this_page, bh != head);
if (nr_dirty)
nilfs_set_file_dirty(inode, nr_dirty);
}
return ret;
}


@ -790,7 +790,7 @@ int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
&hole_size, &rec, &is_last);
if (ret) {
mlog_errno(ret);
goto out_unlock;
}
if (rec.e_blkno == 0ULL) {


@ -2288,7 +2288,7 @@ relock:
ret = ocfs2_inode_lock(inode, NULL, 1);
if (ret < 0) {
mlog_errno(ret);
goto out;
}
ocfs2_inode_unlock(inode, 1);


@ -562,6 +562,9 @@ int __trace_bprintk(unsigned long ip, const char *fmt, ...);
extern __printf(2, 3)
int __trace_printk(unsigned long ip, const char *fmt, ...);
extern int __trace_bputs(unsigned long ip, const char *str);
extern int __trace_puts(unsigned long ip, const char *str, int size);
/**
* trace_puts - write a string into the ftrace buffer
* @str: the string to record
@ -587,8 +590,6 @@ int __trace_printk(unsigned long ip, const char *fmt, ...);
* (1 when __trace_bputs is used, strlen(str) when __trace_puts is used)
*/
#define trace_puts(str) ({ \
static const char *trace_printk_fmt \
__attribute__((section("__trace_printk_fmt"))) = \


@ -83,7 +83,6 @@
extern struct bus_type rio_bus_type;
extern struct device rio_bus;
struct rio_mport;
struct rio_dev;
@ -237,6 +236,7 @@ enum rio_phy_type {
* @name: Port name string
* @priv: Master port private data
* @dma: DMA device associated with mport
* @nscan: RapidIO network enumeration/discovery operations
*/
struct rio_mport {
struct list_head dbells; /* list of doorbell events */
@ -262,8 +262,14 @@ struct rio_mport {
#ifdef CONFIG_RAPIDIO_DMA_ENGINE
struct dma_device dma;
#endif
struct rio_scan *nscan;
};
/*
* Enumeration/discovery control flags
*/
#define RIO_SCAN_ENUM_NO_WAIT 0x00000001 /* Do not wait for enum completed */
struct rio_id_table {
u16 start; /* logical minimal id */
u32 max; /* max number of IDs in table */
@ -460,6 +466,16 @@ static inline struct rio_mport *dma_to_mport(struct dma_device *ddev)
}
#endif /* CONFIG_RAPIDIO_DMA_ENGINE */
/**
* struct rio_scan - RIO enumeration and discovery operations
* @enumerate: Callback to perform RapidIO fabric enumeration.
* @discover: Callback to perform RapidIO fabric discovery.
*/
struct rio_scan {
int (*enumerate)(struct rio_mport *mport, u32 flags);
int (*discover)(struct rio_mport *mport, u32 flags);
};
/* Architecture and hardware-specific functions */
extern int rio_register_mport(struct rio_mport *);
extern int rio_open_inb_mbox(struct rio_mport *, void *, int, int);


@ -433,5 +433,6 @@ extern u16 rio_local_get_device_id(struct rio_mport *port);
extern struct rio_dev *rio_get_device(u16 vid, u16 did, struct rio_dev *from);
extern struct rio_dev *rio_get_asm(u16 vid, u16 did, u16 asm_vid, u16 asm_did,
struct rio_dev *from);
extern int rio_init_mports(void);
#endif /* LINUX_RIO_DRV_H */


@ -217,6 +217,8 @@ do { \
if (!ret) \
break; \
} \
if (!ret && (condition)) \
ret = 1; \
finish_wait(&wq, &__wait); \
} while (0)
@ -233,8 +235,9 @@ do { \
* wake_up() has to be called after changing any variable that could
* change the result of the wait condition.
*
* The function returns 0 if the @timeout elapsed, or the remaining
* jiffies (at least 1) if the @condition evaluated to %true before
* the @timeout elapsed.
*/
#define wait_event_timeout(wq, condition, timeout) \
({ \
@ -302,6 +305,8 @@ do { \
ret = -ERESTARTSYS; \
break; \
} \
if (!ret && (condition)) \
ret = 1; \
finish_wait(&wq, &__wait); \
} while (0)
@ -318,9 +323,10 @@ do { \
* wake_up() has to be called after changing any variable that could
* change the result of the wait condition.
*
* Returns:
* 0 if the @timeout elapsed, -%ERESTARTSYS if it was interrupted by
* a signal, or the remaining jiffies (at least 1) if the @condition
* evaluated to %true before the @timeout elapsed.
*/
#define wait_event_interruptible_timeout(wq, condition, timeout) \
({ \

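A minimal sketch of how a caller distinguishes the three documented outcomes
of wait_event_interruptible_timeout() (the wait queue and condition names are
illustrative, not from this patch):

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/types.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);	/* hypothetical */
static bool done;

static int wait_for_done(void)
{
	long ret = wait_event_interruptible_timeout(my_wq, done,
						    msecs_to_jiffies(500));
	if (ret == 0)
		return -ETIMEDOUT;	/* @timeout elapsed, condition false */
	if (ret == -ERESTARTSYS)
		return ret;		/* interrupted by a signal */
	return 0;			/* condition true, ret >= 1 jiffies left */
}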

@ -1021,9 +1021,6 @@ static void audit_log_rule_change(char *action, struct audit_krule *rule, int re
* @seq: netlink audit message sequence (serial) number
* @data: payload data
* @datasz: size of payload data
*/
int audit_receive_filter(int type, int pid, int seq, void *data, size_t datasz)
{


@ -2325,7 +2325,12 @@ static void collapse_huge_page(struct mm_struct *mm,
pte_unmap(pte);
spin_lock(&mm->page_table_lock);
BUG_ON(!pmd_none(*pmd));
/*
* We can only use set_pmd_at when establishing
* hugepmds and never for establishing regular pmds that
* points to regular pagetables. Use pmd_populate for that
*/
pmd_populate(mm, pmd, pmd_pgtable(_pmd));
spin_unlock(&mm->page_table_lock);
anon_vma_unlock_write(vma->anon_vma);
goto out;


@ -4108,8 +4108,6 @@ __mem_cgroup_uncharge_common(struct page *page, enum charge_type ctype,
if (mem_cgroup_disabled())
return NULL;
if (PageTransHuge(page)) {
nr_pages <<= compound_order(page);
VM_BUG_ON(!PageTransHuge(page));
@ -4205,6 +4203,18 @@ void mem_cgroup_uncharge_page(struct page *page)
if (page_mapped(page))
return;
VM_BUG_ON(page->mapping && !PageAnon(page));
/*
* If the page is in swap cache, uncharge should be deferred
* to the swap path, which also properly accounts swap usage
* and handles memcg lifetime.
*
* Note that this check is not stable and reclaim may add the
* page to swap cache at any time after this. However, if the
* page is not in swap cache by the time page->mapcount hits
* 0, there won't be any page table references to the swap
* slot, and reclaim will free it and not actually write the
* page to disk.
*/
if (PageSwapCache(page))
return;
__mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_ANON, false);


@ -720,9 +720,12 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn,
start = phys_start_pfn << PAGE_SHIFT;
size = nr_pages * PAGE_SIZE;
ret = release_mem_region_adjustable(&iomem_resource, start, size);
if (ret) {
resource_size_t endres = start + size - 1;
pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
&start, &endres, ret);
}
sections_to_remove = nr_pages / PAGES_PER_SECTION;
for (i = 0; i < sections_to_remove; i++) {

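The %pa specifier prints a phys_addr_t/resource_size_t through a pointer,
which is why the fix above passes &start and &endres. A minimal sketch of the
same call (function name is hypothetical):

#include <linux/ioport.h>
#include <linux/kernel.h>

static void show_range(resource_size_t start, resource_size_t size)
{
	resource_size_t end = start + size - 1;

	/* %pa dereferences its argument, so pass addresses, not values */
	pr_info("range <%pa-%pa>\n", &start, &end);
}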

@ -165,7 +165,7 @@ static int remove_migration_pte(struct page *new, struct vm_area_struct *vma,
pte = arch_make_huge_pte(pte, vma, new, 0);
}
#endif
flush_dcache_page(new);
set_pte_at(mm, addr, ptep, pte);
if (PageHuge(new)) {


@ -40,48 +40,44 @@ void __mmu_notifier_release(struct mm_struct *mm)
int id;
/*
* SRCU here will block mmu_notifier_unregister until
* ->release returns.
*/
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
/*
* If ->release runs before mmu_notifier_unregister it must be
* handled, as it's the only way for the driver to flush all
* existing sptes and stop the driver from establishing any more
* sptes before all the pages in the mm are freed.
*/
if (mn->ops->release)
mn->ops->release(mn, mm);
srcu_read_unlock(&srcu, id);
spin_lock(&mm->mmu_notifier_mm->lock);
while (unlikely(!hlist_empty(&mm->mmu_notifier_mm->list))) {
mn = hlist_entry(mm->mmu_notifier_mm->list.first,
struct mmu_notifier,
hlist);
/*
* We arrived before mmu_notifier_unregister so
* mmu_notifier_unregister will do nothing other than to wait
* for ->release to finish and for mmu_notifier_unregister to
* return.
*/
hlist_del_init_rcu(&mn->hlist);
}
spin_unlock(&mm->mmu_notifier_mm->lock);
/*
* synchronize_srcu here prevents mmu_notifier_release from returning to
* exit_mmap (which would proceed with freeing all pages in the mm)
* until the ->release method returns, if it was invoked by
* mmu_notifier_unregister.
*
* The mmu_notifier_mm can't go away from under us because one mm_count
* is held by exit_mmap.
*/
synchronize_srcu(&srcu);
}
@ -292,31 +288,34 @@ void mmu_notifier_unregister(struct mmu_notifier *mn, struct mm_struct *mm)
{
BUG_ON(atomic_read(&mm->mm_count) <= 0);
if (!hlist_unhashed(&mn->hlist)) {
/*
* SRCU here will force exit_mmap to wait for ->release to
* finish before freeing the pages.
*/
int id;
id = srcu_read_lock(&srcu);
/*
* exit_mmap will block in mmu_notifier_release to guarantee
* that ->release is called before freeing the pages.
*/
if (mn->ops->release)
mn->ops->release(mn, mm);
/*
* Allow __mmu_notifier_release() to complete.
*/
srcu_read_unlock(&srcu, id);
spin_lock(&mm->mmu_notifier_mm->lock);
/*
* Can not use list_del_rcu() since __mmu_notifier_release
* can delete it before we hold the lock.
*/
hlist_del_init_rcu(&mn->hlist);
spin_unlock(&mm->mmu_notifier_mm->lock);
}
/*
* Wait for any running method to finish, of course including
* ->release if it was run by mmu_notifier_release instead of us.
*/
synchronize_srcu(&srcu);


@ -127,28 +127,7 @@ static int walk_hugetlb_range(struct vm_area_struct *vma,
return 0;
}
#else /* CONFIG_HUGETLB_PAGE */
static int walk_hugetlb_range(struct vm_area_struct *vma,
unsigned long addr, unsigned long end,
struct mm_walk *walk)
@ -198,30 +177,53 @@ int walk_page_range(unsigned long addr, unsigned long end,
if (!walk->mm)
return -EINVAL;
VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
pgd = pgd_offset(walk->mm, addr);
do {
struct vm_area_struct *vma = NULL;
next = pgd_addr_end(addr, end);
/*
 * This function was not intended to be vma based.
 * But there are vma special cases to be handled:
 * - hugetlb vma's
 * - VM_PFNMAP vma's
 */
vma = find_vma(walk->mm, addr);
if (vma) {
/*
 * There are no page structures backing a VM_PFNMAP
 * range, so do not allow split_huge_page_pmd().
 */
if ((vma->vm_start <= addr) &&
(vma->vm_flags & VM_PFNMAP)) {
next = vma->vm_end;
pgd = pgd_offset(walk->mm, next);
continue;
}
/*
* Handle hugetlb vma individually because pagetable
* walk for the hugetlb page is dependent on the
* architecture and we can't handled it in the same
* manner as non-huge pages.
*/
if (walk->hugetlb_entry && (vma->vm_start <= addr) &&
is_vm_hugetlb_page(vma)) {
if (vma->vm_end < next)
next = vma->vm_end;
/*
* Hugepage is very tightly coupled with vma,
* so walk through hugetlb entries within a
* given vma.
*/
err = walk_hugetlb_range(vma, addr, next, walk);
if (err)
break;
pgd = pgd_offset(walk->mm, next);
continue;
}
}
if (pgd_none_or_clear_bad(pgd)) {


@ -6,7 +6,6 @@ TARGETS += memory-hotplug
TARGETS += mqueue
TARGETS += net
TARGETS += ptrace
TARGETS += vm
all:


@ -1,10 +0,0 @@
CFLAGS += -iquote../../../../include/uapi -Wall
soft-dirty: soft-dirty.c
all: soft-dirty
clean:
rm -f soft-dirty
run_tests: all
@./soft-dirty || echo "soft-dirty selftests: [FAIL]"


@ -1,114 +0,0 @@
#include <stdlib.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
typedef unsigned long long u64;
#define PME_PRESENT (1ULL << 63)
#define PME_SOFT_DIRTY (1Ull << 55)
#define PAGES_TO_TEST 3
#ifndef PAGE_SIZE
#define PAGE_SIZE 4096
#endif
static void get_pagemap2(char *mem, u64 *map)
{
int fd;
fd = open("/proc/self/pagemap2", O_RDONLY);
if (fd < 0) {
perror("Can't open pagemap2");
exit(1);
}
lseek(fd, (unsigned long)mem / PAGE_SIZE * sizeof(u64), SEEK_SET);
read(fd, map, sizeof(u64) * PAGES_TO_TEST);
close(fd);
}
static inline char map_p(u64 map)
{
return map & PME_PRESENT ? 'p' : '-';
}
static inline char map_sd(u64 map)
{
return map & PME_SOFT_DIRTY ? 'd' : '-';
}
static int check_pte(int step, int page, u64 *map, u64 want)
{
if ((map[page] & want) != want) {
printf("Step %d Page %d has %c%c, want %c%c\n",
step, page,
map_p(map[page]), map_sd(map[page]),
map_p(want), map_sd(want));
return 1;
}
return 0;
}
static void clear_refs(void)
{
int fd;
char *v = "4";
fd = open("/proc/self/clear_refs", O_WRONLY);
if (write(fd, v, 3) < 3) {
perror("Can't clear soft-dirty bit");
exit(1);
}
close(fd);
}
int main(void)
{
char *mem, x;
u64 map[PAGES_TO_TEST];
mem = mmap(NULL, PAGES_TO_TEST * PAGE_SIZE,
PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, 0, 0);
x = mem[0];
mem[2 * PAGE_SIZE] = 'c';
get_pagemap2(mem, map);
if (check_pte(1, 0, map, PME_PRESENT))
return 1;
if (check_pte(1, 1, map, 0))
return 1;
if (check_pte(1, 2, map, PME_PRESENT | PME_SOFT_DIRTY))
return 1;
clear_refs();
get_pagemap2(mem, map);
if (check_pte(2, 0, map, PME_PRESENT))
return 1;
if (check_pte(2, 1, map, 0))
return 1;
if (check_pte(2, 2, map, PME_PRESENT))
return 1;
mem[0] = 'a';
mem[PAGE_SIZE] = 'b';
x = mem[2 * PAGE_SIZE];
get_pagemap2(mem, map);
if (check_pte(3, 0, map, PME_PRESENT | PME_SOFT_DIRTY))
return 1;
if (check_pte(3, 1, map, PME_PRESENT | PME_SOFT_DIRTY))
return 1;
if (check_pte(3, 2, map, PME_PRESENT))
return 1;
(void)x; /* gcc warn */
printf("PASS\n");
return 0;
}