Char/Misc driver patches for 4.20-rc1


Merge tag 'char-misc-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of char/misc patches for 4.20-rc1.

  Loads of things here, we have new code in all of these driver
  subsystems:
   - fpga
   - stm
   - extcon
   - nvmem
   - eeprom
   - hyper-v
   - gsmi
   - coresight
   - thunderbolt
   - vmw_balloon
   - goldfish
   - soundwire
  along with lots of fixes and minor changes to other small drivers.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (245 commits)
  Documentation/security-bugs: Clarify treatment of embargoed information
  lib: Fix ia64 bootloader linkage
  MAINTAINERS: Clarify UIO vs UIOVEC maintainer
  docs/uio: fix a grammar nitpick
  docs: fpga: document programming fpgas using regions
  fpga: add devm_fpga_region_create
  fpga: bridge: add devm_fpga_bridge_create
  fpga: mgr: add devm_fpga_mgr_create
  hv_balloon: Replace spin_is_locked() with lockdep
  sgi-xp: Replace spin_is_locked() with lockdep
  eeprom: New ee1004 driver for DDR4 memory
  eeprom: at25: remove unneeded 'at25_remove'
  w1: IAD Register is yet readable trough iad sys file. Fix snprintf (%u for unsigned, count for max size).
  misc: mic: scif: remove set but not used variables 'src_dma_addr, dst_dma_addr'
  misc: mic: fix a DMA pool free failure
  platform: goldfish: pipe: Add a blank line to separate varibles and code
  platform: goldfish: pipe: Remove redundant casting
  platform: goldfish: pipe: Call misc_deregister if init fails
  platform: goldfish: pipe: Move the file-scope goldfish_pipe_dev variable into the driver state
  platform: goldfish: pipe: Move the file-scope goldfish_pipe_miscdev variable into the driver state
  ...
Linus Torvalds 2018-10-26 09:11:43 -07:00
commit 18d0eae30e
180 changed files with 6928 additions and 3401 deletions


@ -0,0 +1,41 @@
What: /config/stp-policy/<device>:p_sys-t.<policy>/<node>/uuid
Date: June 2018
KernelVersion: 4.19
Description:
UUID source identifier string, RW.
The default value is randomly generated when the <node> is created (at mkdir time).
Data coming from trace sources that use this <node> will be
tagged with this UUID in the MIPI SyS-T packet stream, to
allow the decoder to discern between different sources
within the same master/channel range, and identify the
higher level decoders that may be needed for each source.

What: /config/stp-policy/<device>:p_sys-t.<policy>/<node>/do_len
Date: June 2018
KernelVersion: 4.19
Description:
Include payload length in the MIPI SyS-T header, boolean.
If enabled, the SyS-T protocol encoder will include payload
length in each packet's metadata. This is normally redundant
if the underlying transport protocol supports marking message
boundaries (which STP does), so this is off by default.

What: /config/stp-policy/<device>:p_sys-t.<policy>/<node>/ts_interval
Date: June 2018
KernelVersion: 4.19
Description:
Time interval in milliseconds. Include a timestamp in the
MIPI SyS-T packet metadata, if this many milliseconds have
passed since the previous packet from this source. Zero is
the default and stands for "never send the timestamp".

What: /config/stp-policy/<device>:p_sys-t.<policy>/<node>/clocksync_interval
Date: June 2018
KernelVersion: 4.19
Description:
Time interval in milliseconds. Send a CLOCKSYNC packet if
this many milliseconds have passed since the previous
CLOCKSYNC packet from this source. Zero is the default and
stands for "never send the CLOCKSYNC". It makes sense to
use this option with sources that generate constant and/or
periodic data, like stm_heartbeat.


@ -0,0 +1,21 @@
What: /sys/bus/vmbus/devices/.../driver_override
Date: August 2019
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description:
This file allows the driver for a device to be specified,
overriding standard static and dynamic ID matching. When
specified, only a driver with a name matching the value written
to driver_override will have an opportunity to bind to the
device. The override is specified by writing a string to the
driver_override file (echo uio_hv_generic > driver_override) and
may be cleared with an empty string (echo > driver_override),
which returns the device to the standard matching rules for
binding. Writing to driver_override does not automatically
unbind the device from its current driver or make any attempt
to automatically load the specified driver. If no driver with a
matching name is currently loaded in the kernel, the device
will not bind to any driver. This also allows devices to opt
out of driver binding using a driver_override name such as
"none". Only a single driver may be specified in the override;
there is no support for parsing delimiters.
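For illustration, the same override can be set from C; this is a sketch equivalent to the echo above, where the device path (including the device GUID) is an example and not a fixed name:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* dev_path: e.g. "/sys/bus/vmbus/devices/<device-guid>" */
    int set_driver_override(const char *dev_path)
    {
            char path[256];
            int fd;

            snprintf(path, sizeof(path), "%s/driver_override", dev_path);
            fd = open(path, O_WRONLY);
            if (fd < 0)
                    return -1;

            /* writing an empty string would clear the override again */
            if (write(fd, "uio_hv_generic", strlen("uio_hv_generic")) < 0) {
                    close(fd);
                    return -1;
            }

            return close(fd);
    }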


@ -26,23 +26,34 @@ information is helpful. Any exploit code is very helpful and will not
be released without consent from the reporter unless it has already been
made public.
Disclosure
----------
Disclosure and embargoed information
------------------------------------
The goal of the Linux kernel security team is to work with the bug
submitter to understand and fix the bug. We prefer to publish the fix as
soon as possible, but try to avoid public discussion of the bug itself
and leave that to others.
The security list is not a disclosure channel. For that, see Coordination
below.
Publishing the fix may be delayed when the bug or the fix is not yet
fully understood, the solution is not well-tested or for vendor
coordination. However, we expect these delays to be short, measurable in
days, not weeks or months. A release date is negotiated by the security
team working with the bug submitter as well as vendors. However, the
kernel security team holds the final say when setting a timeframe. The
timeframe varies from immediate (esp. if it's already publicly known bug)
to a few weeks. As a basic default policy, we expect report date to
release date to be on the order of 7 days.
Once a robust fix has been developed, our preference is to release the
fix in a timely fashion, treating it no differently than any of the other
thousands of changes and fixes the Linux kernel project releases every
month.
However, at the request of the reporter, we will postpone releasing the
fix for up to 5 business days after the date of the report or after the
embargo has lifted; whichever comes first. The only exception to that
rule is if the bug is publicly known, in which case the preference is to
release the fix as soon as it's available.
Whilst embargoed information may be shared with trusted individuals in
order to develop a fix, such information will not be published alongside
the fix or on any other disclosure channel without the permission of the
reporter. This includes but is not limited to the original bug report
and followup discussions (if any), exploits, CVE information or the
identity of the reporter.
In other words, our only interest is in getting bugs fixed. All other
information submitted to the security list and any followup discussions
of the report are treated confidentially even after the embargo has been
lifted, in perpetuity.
Coordination
------------
@ -68,7 +79,7 @@ may delay the bug handling. If a reporter wishes to have a CVE identifier
assigned ahead of public disclosure, they will need to contact the private
linux-distros list, described above. When such a CVE identifier is known
before a patch is provided, it is desirable to mention it in the commit
message, though.
message if the reporter agrees.
Non-disclosure agreements
-------------------------


@ -54,9 +54,7 @@ its hardware characteristics.
clocks the core of that coresight component. The latter clock
is optional.
* port or ports: The representation of the component's port
layout using the generic DT graph presentation found in
"bindings/graph.txt".
* port or ports: see "Graph bindings for Coresight" below.
* Additional required properties for System Trace Macrocells (STM):
* reg: along with the physical base address and length of the register
@ -73,7 +71,7 @@ its hardware characteristics.
AMBA markee):
- "arm,coresight-replicator"
* port or ports: same as above.
* port or ports: see "Graph bindings for Coresight" below.
* Optional properties for ETM/PTMs:
@ -96,6 +94,20 @@ its hardware characteristics.
* interrupts : Exactly one SPI may be listed for reporting the address
error
Graph bindings for Coresight
-------------------------------
Coresight components are interconnected to create a data path for the flow of
trace data generated from the "sources" to their collection points ("sinks").
Each coresight component must describe its "input" and "output" connections.
The connections must be described via the generic DT graph bindings, as
described in "bindings/graph.txt", where each "port", along with an
"endpoint", represents a hardware port and its connection.
* All output ports must be listed inside a child node named "out-ports".
* All input ports must be listed inside a child node named "in-ports".
* Port address must match the hardware port number.
Example:
1. Sinks
@ -105,10 +117,11 @@ Example:
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
etb_in_port: endpoint@0 {
slave-mode;
remote-endpoint = <&replicator_out_port0>;
in-ports {
port {
etb_in_port: endpoint@0 {
remote-endpoint = <&replicator_out_port0>;
};
};
};
};
@ -119,10 +132,11 @@ Example:
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
tpiu_in_port: endpoint@0 {
slave-mode;
remote-endpoint = <&replicator_out_port1>;
in-ports {
port {
tpiu_in_port: endpoint@0 {
remote-endpoint = <&replicator_out_port1>;
};
};
};
};
@ -133,22 +147,16 @@ Example:
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
ports {
#address-cells = <1>;
#size-cells = <0>;
/* input port */
port@0 {
reg = <0>;
in-ports {
port {
etr_in_port: endpoint {
slave-mode;
remote-endpoint = <&replicator2_out_port0>;
};
};
};
/* CATU link represented by output port */
port@1 {
reg = <1>;
out-ports {
port {
etr_out_port: endpoint {
remote-endpoint = <&catu_in_port>;
};
@ -163,7 +171,7 @@ Example:
*/
compatible = "arm,coresight-replicator";
ports {
out-ports {
#address-cells = <1>;
#size-cells = <0>;
@ -181,12 +189,11 @@ Example:
remote-endpoint = <&tpiu_in_port>;
};
};
};
/* replicator input port */
port@2 {
reg = <0>;
in-ports {
port {
replicator_in_port0: endpoint {
slave-mode;
remote-endpoint = <&funnel_out_port0>;
};
};
@ -199,40 +206,36 @@ Example:
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
ports {
#address-cells = <1>;
#size-cells = <0>;
/* funnel output port */
port@0 {
reg = <0>;
out-ports {
port {
funnel_out_port0: endpoint {
remote-endpoint =
<&replicator_in_port0>;
};
};
};
/* funnel input ports */
port@1 {
in-ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
funnel_in_port0: endpoint {
slave-mode;
remote-endpoint = <&ptm0_out_port>;
};
};
port@2 {
port@1 {
reg = <1>;
funnel_in_port1: endpoint {
slave-mode;
remote-endpoint = <&ptm1_out_port>;
};
};
port@3 {
port@2 {
reg = <2>;
funnel_in_port2: endpoint {
slave-mode;
remote-endpoint = <&etm0_out_port>;
};
};
@ -248,9 +251,11 @@ Example:
cpu = <&cpu0>;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
ptm0_out_port: endpoint {
remote-endpoint = <&funnel_in_port0>;
out-ports {
port {
ptm0_out_port: endpoint {
remote-endpoint = <&funnel_in_port0>;
};
};
};
};
@ -262,9 +267,11 @@ Example:
cpu = <&cpu1>;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
ptm1_out_port: endpoint {
remote-endpoint = <&funnel_in_port1>;
out-ports {
port {
ptm1_out_port: endpoint {
remote-endpoint = <&funnel_in_port1>;
};
};
};
};
@ -278,9 +285,11 @@ Example:
clocks = <&soc_smc50mhz>;
clock-names = "apb_pclk";
port {
stm_out_port: endpoint {
remote-endpoint = <&main_funnel_in_port2>;
out-ports {
port {
stm_out_port: endpoint {
remote-endpoint = <&main_funnel_in_port2>;
};
};
};
};
@ -295,10 +304,11 @@ Example:
clock-names = "apb_pclk";
interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
port {
catu_in_port: endpoint {
slave-mode;
remote-endpoint = <&etr_out_port>;
in-ports {
port {
catu_in_port: endpoint {
remote-endpoint = <&etr_out_port>;
};
};
};
};


@ -4,6 +4,12 @@ FPGA Bridge
API to implement a new FPGA bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* struct :c:type:`fpga_bridge` — The FPGA Bridge structure
* struct :c:type:`fpga_bridge_ops` — Low level Bridge driver ops
* :c:func:`devm_fpga_bridge_create()` — Allocate and init a bridge struct
* :c:func:`fpga_bridge_register()` — Register a bridge
* :c:func:`fpga_bridge_unregister()` — Unregister a bridge
.. kernel-doc:: include/linux/fpga/fpga-bridge.h
:functions: fpga_bridge
@ -11,39 +17,10 @@ API to implement a new FPGA bridge
:functions: fpga_bridge_ops
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_create
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_free
:functions: devm_fpga_bridge_create
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_register
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_unregister
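As a sketch of how these fit together in a bridge driver's probe function,
following the same devm_ pattern as the manager example; the my_bridge names
and priv structure here are hypothetical, not from the kernel tree::

    #include <linux/fpga/fpga-bridge.h>

    static int my_bridge_probe(struct platform_device *pdev)
    {
            struct my_bridge_priv *priv;    /* hypothetical driver state */
            struct fpga_bridge *br;

            priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
            if (!priv)
                    return -ENOMEM;

            /* devm_ variant: the bridge struct is freed automatically */
            br = devm_fpga_bridge_create(&pdev->dev, "My FPGA Bridge",
                                         &my_bridge_ops, priv);
            if (!br)
                    return -ENOMEM;

            platform_set_drvdata(pdev, br);

            return fpga_bridge_register(br);
    }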
API to control an FPGA bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You probably won't need these directly. FPGA regions should handle this.
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: of_fpga_bridge_get
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_get
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_put
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_get_to_list
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: of_fpga_bridge_get_to_list
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_enable
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_disable


@ -49,18 +49,14 @@ probe function calls fpga_mgr_register(), such as::
* them in priv
*/
mgr = fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
&socfpga_fpga_ops, priv);
mgr = devm_fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
&socfpga_fpga_ops, priv);
if (!mgr)
return -ENOMEM;
platform_set_drvdata(pdev, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int socfpga_fpga_remove(struct platform_device *pdev)
@ -102,67 +98,19 @@ The ops include a .state function which will determine the state the FPGA is in
and return a code of type enum fpga_mgr_states. It doesn't result in a change
in state.
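A minimal .state implementation might look like the sketch below; the
status-read helper and bit name are hypothetical stand-ins for real hardware
access, while the return codes come from enum fpga_mgr_states::

    #include <linux/fpga/fpga-mgr.h>

    static enum fpga_mgr_states socfpga_fpga_state(struct fpga_manager *mgr)
    {
            struct socfpga_fpga_priv *priv = mgr->priv;

            /* socfpga_read_status()/STATUS_USER_MODE are hypothetical */
            if (socfpga_read_status(priv) & STATUS_USER_MODE)
                    return FPGA_MGR_STATE_OPERATING;

            return FPGA_MGR_STATE_UNKNOWN;
    }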
How to write an image buffer to a supported FPGA
------------------------------------------------
Some sample code::
#include <linux/fpga/fpga-mgr.h>
struct fpga_manager *mgr;
struct fpga_image_info *info;
int ret;
/*
* Get a reference to FPGA manager. The manager is not locked, so you can
* hold onto this reference without it preventing programming.
*
* This example uses the device node of the manager. Alternatively, use
* fpga_mgr_get(dev) instead if you have the device.
*/
mgr = of_fpga_mgr_get(mgr_node);
/* struct with information about the FPGA image to program. */
info = fpga_image_info_alloc(dev);
/* flags indicates whether to do full or partial reconfiguration */
info->flags = FPGA_MGR_PARTIAL_RECONFIG;
/*
* At this point, indicate where the image is. This is pseudo-code; you're
* going to use one of these three.
*/
if (image is in a scatter gather table) {
info->sgt = [your scatter gather table]
} else if (image is in a buffer) {
info->buf = [your image buffer]
info->count = [image buffer size]
} else if (image is in a firmware file) {
info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL);
}
/* Get exclusive control of FPGA manager */
ret = fpga_mgr_lock(mgr);
/* Load the buffer to the FPGA */
ret = fpga_mgr_buf_load(mgr, &info, buf, count);
/* Release the FPGA manager */
fpga_mgr_unlock(mgr);
fpga_mgr_put(mgr);
/* Deallocate the image info if you're done with it */
fpga_image_info_free(info);
API for implementing a new FPGA Manager driver
----------------------------------------------
* ``fpga_mgr_states`` — Values for :c:member:`fpga_manager->state`.
* struct :c:type:`fpga_manager` — the FPGA manager struct
* struct :c:type:`fpga_manager_ops` — Low level FPGA manager driver ops
* :c:func:`devm_fpga_mgr_create` — Allocate and init a manager struct
* :c:func:`fpga_mgr_register` — Register an FPGA manager
* :c:func:`fpga_mgr_unregister` — Unregister an FPGA manager
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_mgr_states
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_manager
@ -170,56 +118,10 @@ API for implementing a new FPGA Manager driver
:functions: fpga_manager_ops
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_create
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_free
:functions: devm_fpga_mgr_create
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_register
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_unregister
API for programming an FPGA
---------------------------
FPGA Manager flags
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:doc: FPGA Manager flags
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_image_info
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_mgr_states
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_image_info_alloc
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_image_info_free
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: of_fpga_mgr_get
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_get
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_put
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_lock
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_unlock
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_mgr_states
Note - use :c:func:`fpga_region_program_fpga()` instead of :c:func:`fpga_mgr_load()`
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_load


@ -0,0 +1,107 @@
In-kernel API for FPGA Programming
==================================
Overview
--------
The in-kernel API for FPGA programming is a combination of APIs from
FPGA manager, bridge, and regions. The actual function used to
trigger FPGA programming is :c:func:`fpga_region_program_fpga()`.
:c:func:`fpga_region_program_fpga()` uses functionality supplied by
the FPGA manager and bridges. It will:
* lock the region's mutex
* lock the mutex of the region's FPGA manager
* build a list of FPGA bridges if a method has been specified to do so
* disable the bridges
* program the FPGA using info passed in :c:member:`fpga_region->info`.
* re-enable the bridges
* release the locks
The struct fpga_image_info specifies what FPGA image to program. It is
allocated by :c:func:`fpga_image_info_alloc()` and freed with
:c:func:`fpga_image_info_free()`.
How to program an FPGA using a region
-------------------------------------
When the FPGA region driver was probed, it was given a pointer to an FPGA
manager driver so it knows which manager to use. The region also either has
a list of bridges to control during programming or a pointer to a function
that will generate that list. Here's some sample code of what to do next::
#include <linux/fpga/fpga-mgr.h>
#include <linux/fpga/fpga-region.h>
struct fpga_image_info *info;
int ret;
/*
* First, alloc the struct with information about the FPGA image to
* program.
*/
info = fpga_image_info_alloc(dev);
if (!info)
return -ENOMEM;
/* Set flags as needed, such as: */
info->flags = FPGA_MGR_PARTIAL_RECONFIG;
/*
* Indicate where the FPGA image is. This is pseudo-code; you're
* going to use one of these three.
*/
if (image is in a scatter gather table) {
info->sgt = [your scatter gather table]
} else if (image is in a buffer) {
info->buf = [your image buffer]
info->count = [image buffer size]
} else if (image is in a firmware file) {
info->firmware_name = devm_kstrdup(dev, firmware_name,
GFP_KERNEL);
}
/* Add info to region and do the programming */
region->info = info;
ret = fpga_region_program_fpga(region);
/* Deallocate the image info if you're done with it */
region->info = NULL;
fpga_image_info_free(info);
if (ret)
return ret;
/* Now enumerate whatever hardware has appeared in the FPGA. */
API for programming an FPGA
---------------------------
* :c:func:`fpga_region_program_fpga` — Program an FPGA
* :c:type:`fpga_image_info` — Specifies what FPGA image to program
* :c:func:`fpga_image_info_alloc()` — Allocate an FPGA image info struct
* :c:func:`fpga_image_info_free()` — Free an FPGA image info struct
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_program_fpga
FPGA Manager flags
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:doc: FPGA Manager flags
.. kernel-doc:: include/linux/fpga/fpga-mgr.h
:functions: fpga_image_info
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_image_info_alloc
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_image_info_free


@ -34,41 +34,6 @@ fpga_image_info including:
* flags indicating specifics such as whether the image is for partial
reconfiguration.
How to program an FPGA using a region
-------------------------------------
First, allocate the info struct::
info = fpga_image_info_alloc(dev);
if (!info)
return -ENOMEM;
Set flags as needed, i.e.::
info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
Point to your FPGA image, such as::
info->sgt = &sgt;
Add info to region and do the programming::
region->info = info;
ret = fpga_region_program_fpga(region);
:c:func:`fpga_region_program_fpga()` operates on info passed in the
fpga_image_info (region->info). This function will attempt to:
* lock the region's mutex
* lock the region's FPGA manager
* build a list of FPGA bridges if a method has been specified to do so
* disable the bridges
* program the FPGA
* re-enable the bridges
* release the locks
Then you will want to enumerate whatever hardware has appeared in the FPGA.
How to add a new FPGA region
----------------------------
@ -77,26 +42,62 @@ An example of usage can be seen in the probe function of [#f2]_.
.. [#f1] ../devicetree/bindings/fpga/fpga-region.txt
.. [#f2] ../../drivers/fpga/of-fpga-region.c
API to program an FPGA
----------------------
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_program_fpga
API to add a new FPGA region
----------------------------
* struct :c:type:`fpga_region` — The FPGA region struct
* :c:func:`devm_fpga_region_create` — Allocate and init a region struct
* :c:func:`fpga_region_register` — Register an FPGA region
* :c:func:`fpga_region_unregister` — Unregister an FPGA region
The FPGA region's probe function will need to get a reference to the FPGA
manager it will be using to do the programming; this usually happens at
probe time.
* :c:func:`fpga_mgr_get` — Get a reference to an FPGA manager, raise ref count
* :c:func:`of_fpga_mgr_get` — Get a reference to an FPGA manager, raise ref count,
given a device node.
* :c:func:`fpga_mgr_put` — Put an FPGA manager
The FPGA region will need to specify which bridges to control while programming
the FPGA. The region driver can build a list of bridges during probe time
(:c:member:`fpga_region->bridge_list`) or it can have a function that creates
the list of bridges to program just before programming
(:c:member:`fpga_region->get_bridges`). The FPGA bridge framework supplies the
following APIs to handle building or tearing down that list.
* :c:func:`fpga_bridge_get_to_list` — Get a ref of an FPGA bridge, add it to a
list
* :c:func:`of_fpga_bridge_get_to_list` — Get a ref of an FPGA bridge, add it to a
list, given a device node
* :c:func:`fpga_bridges_put` — Given a list of bridges, put them
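Putting the pieces together, a region driver's probe might look like the
sketch below; the my_get_bridges callback and mgr_node are assumptions for
illustration, while the create/register calls are the APIs listed above::

    #include <linux/fpga/fpga-region.h>

    static int my_region_probe(struct platform_device *pdev)
    {
            struct fpga_manager *mgr;
            struct fpga_region *region;
            int ret;

            /* mgr_node would come from e.g. a device tree phandle */
            mgr = of_fpga_mgr_get(mgr_node);
            if (IS_ERR(mgr))
                    return PTR_ERR(mgr);

            /* my_get_bridges builds region->bridge_list before programming */
            region = devm_fpga_region_create(&pdev->dev, mgr, my_get_bridges);
            if (!region) {
                    fpga_mgr_put(mgr);
                    return -ENOMEM;
            }

            platform_set_drvdata(pdev, region);

            ret = fpga_region_register(region);
            if (ret)
                    fpga_mgr_put(mgr);

            return ret;
    }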
.. kernel-doc:: include/linux/fpga/fpga-region.h
:functions: fpga_region
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_create
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_free
:functions: devm_fpga_region_create
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_register
.. kernel-doc:: drivers/fpga/fpga-region.c
:functions: fpga_region_unregister
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_get
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: of_fpga_mgr_get
.. kernel-doc:: drivers/fpga/fpga-mgr.c
:functions: fpga_mgr_put
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridge_get_to_list
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: of_fpga_bridge_get_to_list
.. kernel-doc:: drivers/fpga/fpga-bridge.c
:functions: fpga_bridges_put


@ -11,3 +11,5 @@ FPGA Subsystem
fpga-mgr
fpga-bridge
fpga-region
fpga-programming


@ -44,7 +44,7 @@ FPGA Region
-----------
If you are adding a new interface to the FPGA framework, add it on top
of an FPGA region to allow the most reuse of your interface.
of an FPGA region.
The FPGA Region framework (fpga-region.c) associates managers and
bridges as reconfigurable regions. A region may refer to the whole


@ -101,6 +101,34 @@ interface. ::
+--------------------+ | |
+----------------+
Example 5: A stereo stream with L and R channels is rendered by two Masters,
each rendering one channel, and is received by two different Slaves, each
receiving one channel. Both Masters and both Slaves use a single port. ::
+---------------+ Clock Signal +---------------+
| Master +----------------------------------+ Slave |
| Interface | | Interface |
| 1 | | 1 |
| | Data Signal | |
| L +----------------------------------+ L |
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
+---------------+ Clock Signal +---------------+
| Master +----------------------------------+ Slave |
| Interface | | Interface |
| 2 | | 2 |
| | Data Signal | |
| R +----------------------------------+ R |
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
Note: In multi-link cases like the above, one would normally acquire a global
lock and then proceed to lock the individual bus instances. But in this case
the caller framework (ASoC DPCM) guarantees that stream operations on a card
are always serialized, so there is no race condition and hence no need for a
global lock.
SoundWire Stream Management flow
================================
@ -174,6 +202,7 @@ per stream. From ASoC DPCM framework, this stream state may be linked to
.startup() operation.
.. code-block:: c
int sdw_alloc_stream(char * stream_name);
@ -200,6 +229,7 @@ only be invoked once by respective Master(s) and Slave(s). From ASoC DPCM
framework, this stream state is linked to .hw_params() operation.
.. code-block:: c
int sdw_stream_add_master(struct sdw_bus * bus,
struct sdw_stream_config * stream_config,
struct sdw_ports_config * ports_config,
@ -245,6 +275,7 @@ stream. From ASoC DPCM framework, this stream state is linked to
.prepare() operation.
.. code-block:: c
int sdw_prepare_stream(struct sdw_stream_runtime * stream);
@ -274,6 +305,7 @@ stream. From ASoC DPCM framework, this stream state is linked to
.trigger() start operation.
.. code-block:: c
int sdw_enable_stream(struct sdw_stream_runtime * stream);
SDW_STREAM_DISABLED
@ -301,6 +333,7 @@ per stream. From ASoC DPCM framework, this stream state is linked to
.trigger() stop operation.
.. code-block:: c
int sdw_disable_stream(struct sdw_stream_runtime * stream);
@ -325,6 +358,7 @@ per stream. From ASoC DPCM framework, this stream state is linked to
.trigger() stop operation.
.. code-block:: c
int sdw_deprepare_stream(struct sdw_stream_runtime * stream);
@ -349,6 +383,7 @@ all the Master(s) and Slave(s) associated with stream. From ASoC DPCM
framework, this stream state is linked to .hw_free() operation.
.. code-block:: c
int sdw_stream_remove_master(struct sdw_bus * bus,
struct sdw_stream_runtime * stream);
int sdw_stream_remove_slave(struct sdw_slave * slave,
@ -361,6 +396,7 @@ stream assigned as part of ALLOCATED state.
In .shutdown() the data structures maintaining stream state are freed up.
.. code-block:: c
void sdw_release_stream(struct sdw_stream_runtime * stream);
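Putting these states together, a simplified call sequence for one stream
might look like the sketch below (return values ignored; bus, slave and the
config structs are assumed to come from the card driver, and the signatures
are abbreviated as in the prototypes above; sdw_stream_add_slave parallels
sdw_stream_add_master)::

    struct sdw_stream_runtime *stream;

    /* .startup(): SDW_STREAM_ALLOCATED */
    stream = sdw_alloc_stream("pcm-playback");

    /* .hw_params(): SDW_STREAM_CONFIGURED */
    sdw_stream_add_master(bus, &stream_config, &ports_config, stream);
    sdw_stream_add_slave(slave, &stream_config, &ports_config, stream);

    /* .prepare(): SDW_STREAM_PREPARED */
    sdw_prepare_stream(stream);

    /* .trigger() start: SDW_STREAM_ENABLED */
    sdw_enable_stream(stream);

    /* .trigger() stop: SDW_STREAM_DISABLED, then SDW_STREAM_DEPREPARED */
    sdw_disable_stream(stream);
    sdw_deprepare_stream(stream);

    /* .hw_free(): back to SDW_STREAM_ALLOCATED */
    sdw_stream_remove_master(bus, stream);
    sdw_stream_remove_slave(slave, stream);

    /* .shutdown(): free the runtime */
    sdw_release_stream(stream);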
Not Supported


@ -463,8 +463,8 @@ Getting information about your UIO device
Information about all UIO devices is available in sysfs. The first thing
you should do in your driver is check ``name`` and ``version`` to make
sure your talking to the right device and that its kernel driver has the
version you expect.
sure you're talking to the right device and that its kernel driver has
the version you expect.
You should also make sure that the memory mapping you need exists and
has the size you expect.


@ -58,6 +58,37 @@ static int qfprom_probe(struct platform_device *pdev)
It is mandatory that the NVMEM provider has a regmap associated with its
struct device. Failure to do so will cause nvmem_register() to return an
error code.
Users of board files can define and register nvmem cells using the
nvmem_cell_table struct:
static struct nvmem_cell_info foo_nvmem_cells[] = {
{
.name = "macaddr",
.offset = 0x7f00,
.bytes = ETH_ALEN,
}
};
static struct nvmem_cell_table foo_nvmem_cell_table = {
.nvmem_name = "i2c-eeprom",
.cells = foo_nvmem_cells,
.ncells = ARRAY_SIZE(foo_nvmem_cells),
};
nvmem_add_cell_table(&foo_nvmem_cell_table);
Additionally it is possible to create nvmem cell lookup entries and register
them with the nvmem framework from machine code as shown in the example below:
static struct nvmem_cell_lookup foo_nvmem_lookup = {
.nvmem_name = "i2c-eeprom",
.cell_name = "macaddr",
.dev_id = "foo_mac.0",
.con_id = "mac-address",
};
nvmem_add_cell_lookups(&foo_nvmem_lookup, 1);
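With the lookup above registered, the consumer driver bound to the
hypothetical "foo_mac.0" device could then read the cell via its con_id; a
sketch (dev and ndev are the consumer's device and net_device, assumed for
illustration):

    #include <linux/nvmem-consumer.h>
    #include <linux/etherdevice.h>
    #include <linux/slab.h>

    struct nvmem_cell *cell;
    size_t len;
    u8 *mac;

    /* "mac-address" matches the con_id in the lookup above */
    cell = nvmem_cell_get(dev, "mac-address");
    if (IS_ERR(cell))
            return PTR_ERR(cell);

    mac = nvmem_cell_read(cell, &len);  /* kmalloc'ed copy, len = ETH_ALEN */
    nvmem_cell_put(cell);
    if (IS_ERR(mac))
            return PTR_ERR(mac);

    ether_addr_copy(ndev->dev_addr, mac);
    kfree(mac);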
NVMEM Consumers
+++++++++++++++


@ -1,3 +1,5 @@
.. SPDX-License-Identifier: GPL-2.0
===================
System Trace Module
===================
@ -53,12 +55,30 @@ under "user" directory from the example above and this new rule will
be used for trace sources with the id string of "user/dummy".
Trace sources have to open the stm class device's node and write their
trace data into its file descriptor. In order to identify themselves
to the policy, they need to do a STP_POLICY_ID_SET ioctl on this file
descriptor providing their id string. Otherwise, they will be
automatically allocated a master/channel pair upon first write to this
file descriptor according to the "default" rule of the policy, if such
exists.
trace data into its file descriptor.
In order to find an appropriate policy node for a given trace source,
several mechanisms can be used. First, a trace source can explicitly
identify itself by calling the STP_POLICY_ID_SET ioctl on the character
device's file descriptor, providing its id string, before it writes any
data there. Second, if a source chooses not to perform the explicit
identification (because existing software may not be easily patched to
do this), it can simply start writing data, at which point the stm core
will try to find a policy node with a name matching the task's name
(e.g., "syslogd") and, if one exists, it will be used. Third, if the
task name can't be found among the policy nodes, the catch-all entry
"default" will be used, if it exists. This entry also needs to be
created and configured by the system administrator or whatever tools
take care of the policy configuration. Finally, if all the above steps
fail, the write() to the stm file descriptor will return an error
(EINVAL).
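For illustration, a trace source that identifies itself explicitly might do
something like the following from userspace, using struct stp_policy_id from
the uapi header; the device name "dummy_stm.0" and the "user/dummy" id string
are examples, and error handling is abbreviated::

    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/stm.h>

    int stm_write_identified(void)
    {
            const char *idstr = "user/dummy";   /* policy node, as above */
            size_t sz = sizeof(struct stp_policy_id) + strlen(idstr) + 1;
            struct stp_policy_id *id = calloc(1, sz);
            int fd = open("/dev/dummy_stm.0", O_RDWR);

            if (fd < 0 || !id)
                    return -1;

            id->size = sz;
            id->width = 1;          /* request a single channel */
            strcpy(id->id, idstr);

            /* must be done before the first write() */
            if (ioctl(fd, STP_POLICY_ID_SET, id) < 0)
                    return -1;

            return write(fd, "hello", 5) < 0 ? -1 : 0;
    }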
Previously, if no policy nodes were found for a trace source, the stm
class would silently fall back to allocating the first available
contiguous range of master/channels from the beginning of the device's
master/channel range. The new requirement for a policy node to exist
will help programmers and sysadmins identify gaps in configuration
and have better control over unidentified sources.
Some STM devices may allow direct mapping of the channel mmio regions
to userspace for zero-copy writing. One mappable page (in terms of
@ -92,9 +112,9 @@ allocated for the device according to the policy configuration. If
there's a node in the root of the policy directory that matches the
stm_source device's name (for example, "console"), this node will be
used to allocate master and channel numbers. If there's no such policy
node, the stm core will pick the first contiguous chunk of channels
within the first available master. Note that the node must exist
before the stm_source device is connected to its stm device.
node, the stm core will use the catch-all entry "default", if one
exists. If neither policy node exists, the write() to stm_source_link
will return an error.
stm_console
===========


@ -0,0 +1,62 @@
.. SPDX-License-Identifier: GPL-2.0
===================
MIPI SyS-T over STP
===================
The MIPI SyS-T protocol driver can be used with STM class devices to
generate a standardized trace stream. Aside from being a standard, it
provides better trace source identification and timestamp correlation.

In order to use the MIPI SyS-T protocol driver with your STM device,
you'll first need to enable CONFIG_STM_PROTO_SYS_T.
Now, you can select which protocol driver you want to use when you create
a policy for your STM device, by specifying it in the policy name:
# mkdir /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/
In other words, the policy name format is extended like this:
<device_name>:<protocol_name>.<policy_name>
With Intel TH, for example, it can look like "0-sth:p_sys-t.my-policy".

If the protocol name is omitted, the STM class will choose whichever
protocol driver was loaded first.

You can also double-check that everything is working as expected by
running:
# cat /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/protocol
p_sys-t
Now, with the MIPI SyS-T protocol driver, each policy node in the
configfs gets a few additional attributes, which determine per-source
parameters specific to the protocol:
# mkdir /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/default
# ls /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/default
channels
clocksync_interval
do_len
masters
ts_interval
uuid
The most important one here is the "uuid", which determines the UUID
that will be used to tag all data coming from this source. It is
automatically generated when a new node is created, but it is likely
that you would want to change it.
do_len switches on/off the additional "payload length" field in the
MIPI SyS-T message header. It is off by default as the STP already
marks message boundaries.
ts_interval and clocksync_interval determine how much time in milliseconds
can pass before we need to include a protocol (not transport, aka STP)
timestamp in a message header or send a CLOCKSYNC packet, respectively.
See Documentation/ABI/testing/configfs-stp-policy-p_sys-t for more
details.
* [1] https://www.mipi.org/specifications/sys-t


@ -932,6 +932,7 @@ M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
M: Arve Hjønnevåg <arve@android.com>
M: Todd Kjos <tkjos@android.com>
M: Martijn Coenen <maco@android.com>
M: Joel Fernandes <joel@joelfernandes.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
L: devel@driverdev.osuosl.org
S: Supported
@ -13757,7 +13758,7 @@ F: sound/soc/
F: include/sound/soc*
SOUNDWIRE SUBSYSTEM
M: Vinod Koul <vinod.koul@intel.com>
M: Vinod Koul <vkoul@kernel.org>
M: Sanyog Kale <sanyog.r.kale@intel.com>
R: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
@ -15523,13 +15524,19 @@ F: arch/x86/um/
F: fs/hostfs/
F: fs/hppfs/
USERSPACE COPYIN/COPYOUT (UIOVEC)
M: Alexander Viro <viro@zeniv.linux.org.uk>
S: Maintained
F: lib/iov_iter.c
F: include/linux/uio.h
USERSPACE I/O (UIO)
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
F: Documentation/driver-api/uio-howto.rst
F: drivers/uio/
F: include/linux/uio*.h
F: include/linux/uio_driver.h
UTIL-LINUX PACKAGE
M: Karel Zak <kzak@redhat.com>


@ -10,7 +10,7 @@ if ANDROID
config ANDROID_BINDER_IPC
bool "Android Binder IPC Driver"
depends on MMU
depends on MMU && !CPU_CACHE_VIVT
default n
---help---
Binder is used in Android for both communication between processes,


@ -71,6 +71,7 @@
#include <linux/security.h>
#include <linux/spinlock.h>
#include <linux/ratelimit.h>
#include <linux/syscalls.h>
#include <uapi/linux/android/binder.h>
@ -457,9 +458,8 @@ struct binder_ref {
};
enum binder_deferred_state {
BINDER_DEFERRED_PUT_FILES = 0x01,
BINDER_DEFERRED_FLUSH = 0x02,
BINDER_DEFERRED_RELEASE = 0x04,
BINDER_DEFERRED_FLUSH = 0x01,
BINDER_DEFERRED_RELEASE = 0x02,
};
/**
@ -480,9 +480,6 @@ enum binder_deferred_state {
* (invariant after initialized)
* @tsk task_struct for group_leader of process
* (invariant after initialized)
* @files files_struct for process
* (protected by @files_lock)
* @files_lock mutex to protect @files
* @deferred_work_node: element for binder_deferred_list
* (protected by binder_deferred_lock)
* @deferred_work: bitmap of deferred work to perform
@ -527,8 +524,6 @@ struct binder_proc {
struct list_head waiting_threads;
int pid;
struct task_struct *tsk;
struct files_struct *files;
struct mutex files_lock;
struct hlist_node deferred_work_node;
int deferred_work;
bool is_dead;
@ -611,6 +606,23 @@ struct binder_thread {
bool is_dead;
};
/**
* struct binder_txn_fd_fixup - transaction fd fixup list element
* @fixup_entry: list entry
* @file: struct file to be associated with new fd
* @offset: offset in buffer data to this fixup
*
* List element for fd fixups in a transaction. Since file
* descriptors need to be allocated in the context of the
* target process, we pass each fd to be processed in this
* struct.
*/
struct binder_txn_fd_fixup {
struct list_head fixup_entry;
struct file *file;
size_t offset;
};
struct binder_transaction {
int debug_id;
struct binder_work work;
@ -628,6 +640,7 @@ struct binder_transaction {
long priority;
long saved_priority;
kuid_t sender_euid;
struct list_head fd_fixups;
/**
* @lock: protects @from, @to_proc, and @to_thread
*
@ -822,6 +835,7 @@ static void
binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread,
struct binder_work *work)
{
WARN_ON(!list_empty(&thread->waiting_thread_node));
binder_enqueue_work_ilocked(work, &thread->todo);
}
@ -839,6 +853,7 @@ static void
binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
struct binder_work *work)
{
WARN_ON(!list_empty(&thread->waiting_thread_node));
binder_enqueue_work_ilocked(work, &thread->todo);
thread->process_todo = true;
}
@ -920,66 +935,6 @@ static void binder_free_thread(struct binder_thread *thread);
static void binder_free_proc(struct binder_proc *proc);
static void binder_inc_node_tmpref_ilocked(struct binder_node *node);
static int task_get_unused_fd_flags(struct binder_proc *proc, int flags)
{
unsigned long rlim_cur;
unsigned long irqs;
int ret;
mutex_lock(&proc->files_lock);
if (proc->files == NULL) {
ret = -ESRCH;
goto err;
}
if (!lock_task_sighand(proc->tsk, &irqs)) {
ret = -EMFILE;
goto err;
}
rlim_cur = task_rlimit(proc->tsk, RLIMIT_NOFILE);
unlock_task_sighand(proc->tsk, &irqs);
ret = __alloc_fd(proc->files, 0, rlim_cur, flags);
err:
mutex_unlock(&proc->files_lock);
return ret;
}
/*
* copied from fd_install
*/
static void task_fd_install(
struct binder_proc *proc, unsigned int fd, struct file *file)
{
mutex_lock(&proc->files_lock);
if (proc->files)
__fd_install(proc->files, fd, file);
mutex_unlock(&proc->files_lock);
}
/*
* copied from sys_close
*/
static long task_close_fd(struct binder_proc *proc, unsigned int fd)
{
int retval;
mutex_lock(&proc->files_lock);
if (proc->files == NULL) {
retval = -ESRCH;
goto err;
}
retval = __close_fd(proc->files, fd);
/* can't restart close syscall because file table entry was cleared */
if (unlikely(retval == -ERESTARTSYS ||
retval == -ERESTARTNOINTR ||
retval == -ERESTARTNOHAND ||
retval == -ERESTART_RESTARTBLOCK))
retval = -EINTR;
err:
mutex_unlock(&proc->files_lock);
return retval;
}
static bool binder_has_work_ilocked(struct binder_thread *thread,
bool do_proc_work)
{
@ -1270,19 +1225,12 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
} else
node->local_strong_refs++;
if (!node->has_strong_ref && target_list) {
struct binder_thread *thread = container_of(target_list,
struct binder_thread, todo);
binder_dequeue_work_ilocked(&node->work);
/*
* Note: this function is the only place where we queue
* directly to a thread->todo without using the
* corresponding binder_enqueue_thread_work() helper
* functions; in this case it's ok to not set the
* process_todo flag, since we know this node work will
* always be followed by other work that starts queue
* processing: in case of synchronous transactions, a
* BR_REPLY or BR_ERROR; in case of oneway
* transactions, a BR_TRANSACTION_COMPLETE.
*/
binder_enqueue_work_ilocked(&node->work, target_list);
BUG_ON(&thread->todo != target_list);
binder_enqueue_deferred_thread_work_ilocked(thread,
&node->work);
}
} else {
if (!internal)
@ -1958,10 +1906,32 @@ static struct binder_thread *binder_get_txn_from_and_acq_inner(
return NULL;
}
/**
* binder_free_txn_fixups() - free unprocessed fd fixups
* @t: binder transaction for t->from
*
* If the transaction is being torn down prior to being
* processed by the target process, free all of the
* fd fixups and fput the file structs. It is safe to
* call this function after the fixups have been
* processed -- in that case, the list will be empty.
*/
static void binder_free_txn_fixups(struct binder_transaction *t)
{
struct binder_txn_fd_fixup *fixup, *tmp;
list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
fput(fixup->file);
list_del(&fixup->fixup_entry);
kfree(fixup);
}
}
static void binder_free_transaction(struct binder_transaction *t)
{
if (t->buffer)
t->buffer->transaction = NULL;
binder_free_txn_fixups(t);
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
}
@ -2262,12 +2232,17 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
} break;
case BINDER_TYPE_FD: {
struct binder_fd_object *fp = to_binder_fd_object(hdr);
binder_debug(BINDER_DEBUG_TRANSACTION,
" fd %d\n", fp->fd);
if (failed_at)
task_close_fd(proc, fp->fd);
/*
* No need to close the file here since user-space
* closes it for successfully delivered
* transactions. For transactions that weren't
* delivered, the new fd was never allocated so
* there is no need to close and the fput on the
* file is done when the transaction is torn
* down.
*/
WARN_ON(failed_at &&
proc->tsk == current->group_leader);
} break;
case BINDER_TYPE_PTR:
/*
@ -2283,6 +2258,15 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
size_t fd_index;
binder_size_t fd_buf_size;
if (proc->tsk != current->group_leader) {
/*
* Nothing to do if running in sender context
* The fd fixups have not been applied so no
* fds need to be closed.
*/
continue;
}
fda = to_binder_fd_array_object(hdr);
parent = binder_validate_ptr(buffer, fda->parent,
off_start,
@ -2315,7 +2299,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
}
fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset);
for (fd_index = 0; fd_index < fda->num_fds; fd_index++)
task_close_fd(proc, fd_array[fd_index]);
ksys_close(fd_array[fd_index]);
} break;
default:
pr_err("transaction release %d bad object type %x\n",
@ -2447,17 +2431,18 @@ done:
return ret;
}
static int binder_translate_fd(int fd,
static int binder_translate_fd(u32 *fdp,
struct binder_transaction *t,
struct binder_thread *thread,
struct binder_transaction *in_reply_to)
{
struct binder_proc *proc = thread->proc;
struct binder_proc *target_proc = t->to_proc;
int target_fd;
struct binder_txn_fd_fixup *fixup;
struct file *file;
int ret;
int ret = 0;
bool target_allows_fd;
int fd = *fdp;
if (in_reply_to)
target_allows_fd = !!(in_reply_to->flags & TF_ACCEPT_FDS);
@ -2485,19 +2470,24 @@ static int binder_translate_fd(int fd,
goto err_security;
}
target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
if (target_fd < 0) {
/*
* Add fixup record for this transaction. The allocation
* of the fd in the target needs to be done from a
* target thread.
*/
fixup = kzalloc(sizeof(*fixup), GFP_KERNEL);
if (!fixup) {
ret = -ENOMEM;
goto err_get_unused_fd;
goto err_alloc;
}
task_fd_install(target_proc, target_fd, file);
trace_binder_transaction_fd(t, fd, target_fd);
binder_debug(BINDER_DEBUG_TRANSACTION, " fd %d -> %d\n",
fd, target_fd);
fixup->file = file;
fixup->offset = (uintptr_t)fdp - (uintptr_t)t->buffer->data;
trace_binder_transaction_fd_send(t, fd, fixup->offset);
list_add_tail(&fixup->fixup_entry, &t->fd_fixups);
return target_fd;
return ret;
err_get_unused_fd:
err_alloc:
err_security:
fput(file);
err_fget:
@ -2511,8 +2501,7 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
struct binder_thread *thread,
struct binder_transaction *in_reply_to)
{
binder_size_t fdi, fd_buf_size, num_installed_fds;
int target_fd;
binder_size_t fdi, fd_buf_size;
uintptr_t parent_buffer;
u32 *fd_array;
struct binder_proc *proc = thread->proc;
@ -2544,23 +2533,12 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
return -EINVAL;
}
for (fdi = 0; fdi < fda->num_fds; fdi++) {
target_fd = binder_translate_fd(fd_array[fdi], t, thread,
int ret = binder_translate_fd(&fd_array[fdi], t, thread,
in_reply_to);
if (target_fd < 0)
goto err_translate_fd_failed;
fd_array[fdi] = target_fd;
if (ret < 0)
return ret;
}
return 0;
err_translate_fd_failed:
/*
* Failed to allocate fd or security error, free fds
* installed so far.
*/
num_installed_fds = fdi;
for (fdi = 0; fdi < num_installed_fds; fdi++)
task_close_fd(target_proc, fd_array[fdi]);
return target_fd;
}
static int binder_fixup_parent(struct binder_transaction *t,
@ -2723,6 +2701,7 @@ static void binder_transaction(struct binder_proc *proc,
{
int ret;
struct binder_transaction *t;
struct binder_work *w;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end, *off_start;
binder_size_t off_min;
@ -2864,6 +2843,29 @@ static void binder_transaction(struct binder_proc *proc,
goto err_invalid_target_handle;
}
binder_inner_proc_lock(proc);
w = list_first_entry_or_null(&thread->todo,
struct binder_work, entry);
if (!(tr->flags & TF_ONE_WAY) && w &&
w->type == BINDER_WORK_TRANSACTION) {
/*
* Do not allow new outgoing transaction from a
* thread that has a transaction at the head of
* its todo list. Only need to check the head
* because binder_select_thread_ilocked picks a
* thread from proc->waiting_threads to enqueue
* the transaction, and nothing is queued to the
* todo list while the thread is on waiting_threads.
*/
binder_user_error("%d:%d new transaction not allowed when there is a transaction on thread todo\n",
proc->pid, thread->pid);
binder_inner_proc_unlock(proc);
return_error = BR_FAILED_REPLY;
return_error_param = -EPROTO;
return_error_line = __LINE__;
goto err_bad_todo_list;
}
if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
struct binder_transaction *tmp;
@ -2911,6 +2913,7 @@ static void binder_transaction(struct binder_proc *proc,
return_error_line = __LINE__;
goto err_alloc_t_failed;
}
INIT_LIST_HEAD(&t->fd_fixups);
binder_stats_created(BINDER_STAT_TRANSACTION);
spin_lock_init(&t->lock);
@ -3066,17 +3069,16 @@ static void binder_transaction(struct binder_proc *proc,
case BINDER_TYPE_FD: {
struct binder_fd_object *fp = to_binder_fd_object(hdr);
int target_fd = binder_translate_fd(fp->fd, t, thread,
in_reply_to);
int ret = binder_translate_fd(&fp->fd, t, thread,
in_reply_to);
if (target_fd < 0) {
if (ret < 0) {
return_error = BR_FAILED_REPLY;
return_error_param = target_fd;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
fp->pad_binder = 0;
fp->fd = target_fd;
} break;
case BINDER_TYPE_FDA: {
struct binder_fd_array_object *fda =
@ -3233,6 +3235,7 @@ err_bad_object_type:
err_bad_offset:
err_bad_parent:
err_copy_data_failed:
binder_free_txn_fixups(t);
trace_binder_transaction_failed_buffer_release(t->buffer);
binder_transaction_buffer_release(target_proc, t->buffer, offp);
if (target_node)
@ -3247,6 +3250,7 @@ err_alloc_tcomplete_failed:
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
err_alloc_t_failed:
err_bad_todo_list:
err_bad_call_stack:
err_empty_call_stack:
err_dead_binder:
@ -3294,6 +3298,47 @@ err_invalid_target_handle:
}
}
/**
* binder_free_buf() - free the specified buffer
* @proc: binder proc that owns buffer
* @buffer: buffer to be freed
*
* If buffer for an async transaction, enqueue the next async
* transaction from the node.
*
* Cleanup buffer and free it.
*/
static void
binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
{
if (buffer->transaction) {
buffer->transaction->buffer = NULL;
buffer->transaction = NULL;
}
if (buffer->async_transaction && buffer->target_node) {
struct binder_node *buf_node;
struct binder_work *w;
buf_node = buffer->target_node;
binder_node_inner_lock(buf_node);
BUG_ON(!buf_node->has_async_transaction);
BUG_ON(buf_node->proc != proc);
w = binder_dequeue_work_head_ilocked(
&buf_node->async_todo);
if (!w) {
buf_node->has_async_transaction = false;
} else {
binder_enqueue_work_ilocked(
w, &proc->todo);
binder_wakeup_proc_ilocked(proc);
}
binder_node_inner_unlock(buf_node);
}
trace_binder_transaction_buffer_release(buffer);
binder_transaction_buffer_release(proc, buffer, NULL);
binder_alloc_free_buf(&proc->alloc, buffer);
}
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
@ -3480,33 +3525,7 @@ static int binder_thread_write(struct binder_proc *proc,
proc->pid, thread->pid, (u64)data_ptr,
buffer->debug_id,
buffer->transaction ? "active" : "finished");
if (buffer->transaction) {
buffer->transaction->buffer = NULL;
buffer->transaction = NULL;
}
if (buffer->async_transaction && buffer->target_node) {
struct binder_node *buf_node;
struct binder_work *w;
buf_node = buffer->target_node;
binder_node_inner_lock(buf_node);
BUG_ON(!buf_node->has_async_transaction);
BUG_ON(buf_node->proc != proc);
w = binder_dequeue_work_head_ilocked(
&buf_node->async_todo);
if (!w) {
buf_node->has_async_transaction = false;
} else {
binder_enqueue_work_ilocked(
w, &proc->todo);
binder_wakeup_proc_ilocked(proc);
}
binder_node_inner_unlock(buf_node);
}
trace_binder_transaction_buffer_release(buffer);
binder_transaction_buffer_release(proc, buffer, NULL);
binder_alloc_free_buf(&proc->alloc, buffer);
binder_free_buf(proc, buffer);
break;
}
@ -3829,6 +3848,76 @@ static int binder_wait_for_work(struct binder_thread *thread,
return ret;
}
/**
* binder_apply_fd_fixups() - finish fd translation
* @t: binder transaction with list of fd fixups
*
* Now that we are in the context of the transaction target
* process, we can allocate and install fds. Process the
* list of fds to translate and fixup the buffer with the
* new fds.
*
* If we fail to allocate an fd, then free the resources by
* fput'ing files that have not been processed and ksys_close'ing
* any fds that have already been allocated.
*/
static int binder_apply_fd_fixups(struct binder_transaction *t)
{
struct binder_txn_fd_fixup *fixup, *tmp;
int ret = 0;
list_for_each_entry(fixup, &t->fd_fixups, fixup_entry) {
int fd = get_unused_fd_flags(O_CLOEXEC);
u32 *fdp;
if (fd < 0) {
binder_debug(BINDER_DEBUG_TRANSACTION,
"failed fd fixup txn %d fd %d\n",
t->debug_id, fd);
ret = -ENOMEM;
break;
}
binder_debug(BINDER_DEBUG_TRANSACTION,
"fd fixup txn %d fd %d\n",
t->debug_id, fd);
trace_binder_transaction_fd_recv(t, fd, fixup->offset);
fd_install(fd, fixup->file);
fixup->file = NULL;
fdp = (u32 *)(t->buffer->data + fixup->offset);
/*
* This store can cause problems for CPUs with a
* VIVT cache (eg ARMv5) since the cache cannot
* detect virtual aliases to the same physical cacheline.
* To support VIVT, this address and the user-space VA
* would both need to be flushed. Since this kernel
* VA is not constructed via page_to_virt(), we can't
* use flush_dcache_page() on it, so we'd have to use
* an internal function. If devices with VIVT ever
* need to run Android, we'll either need to go back
* to patching the translated fd from the sender side
* (using the non-standard kernel functions), or rework
* how the kernel uses the buffer to use page_to_virt()
* addresses instead of allocating in our own vm area.
*
* For now, we disable compilation if CONFIG_CPU_CACHE_VIVT.
*/
*fdp = fd;
}
list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
if (fixup->file) {
fput(fixup->file);
} else if (ret) {
u32 *fdp = (u32 *)(t->buffer->data + fixup->offset);
ksys_close(*fdp);
}
list_del(&fixup->fixup_entry);
kfree(fixup);
}
return ret;
}
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
@ -4110,6 +4199,34 @@ retry:
tr.sender_pid = 0;
}
ret = binder_apply_fd_fixups(t);
if (ret) {
struct binder_buffer *buffer = t->buffer;
bool oneway = !!(t->flags & TF_ONE_WAY);
int tid = t->debug_id;
if (t_from)
binder_thread_dec_tmpref(t_from);
buffer->transaction = NULL;
binder_cleanup_transaction(t, "fd fixups failed",
BR_FAILED_REPLY);
binder_free_buf(proc, buffer);
binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
proc->pid, thread->pid,
oneway ? "async " :
(cmd == BR_REPLY ? "reply " : ""),
tid, BR_FAILED_REPLY, ret, __LINE__);
if (cmd == BR_REPLY) {
cmd = BR_FAILED_REPLY;
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, cmd);
break;
}
continue;
}
tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (binder_uintptr_t)
@ -4544,6 +4661,42 @@ out:
return ret;
}
static int binder_ioctl_get_node_info_for_ref(struct binder_proc *proc,
struct binder_node_info_for_ref *info)
{
struct binder_node *node;
struct binder_context *context = proc->context;
__u32 handle = info->handle;
if (info->strong_count || info->weak_count || info->reserved1 ||
info->reserved2 || info->reserved3) {
binder_user_error("%d BINDER_GET_NODE_INFO_FOR_REF: only handle may be non-zero.",
proc->pid);
return -EINVAL;
}
/* This ioctl may only be used by the context manager */
mutex_lock(&context->context_mgr_node_lock);
if (!context->binder_context_mgr_node ||
context->binder_context_mgr_node->proc != proc) {
mutex_unlock(&context->context_mgr_node_lock);
return -EPERM;
}
mutex_unlock(&context->context_mgr_node_lock);
node = binder_get_node_from_ref(proc, handle, true, NULL);
if (!node)
return -EINVAL;
info->strong_count = node->local_strong_refs +
node->internal_strong_refs;
info->weak_count = node->local_weak_refs;
binder_put_node(node);
return 0;
}
static int binder_ioctl_get_node_debug_info(struct binder_proc *proc,
struct binder_node_debug_info *info)
{
@ -4638,6 +4791,25 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
break;
}
case BINDER_GET_NODE_INFO_FOR_REF: {
struct binder_node_info_for_ref info;
if (copy_from_user(&info, ubuf, sizeof(info))) {
ret = -EFAULT;
goto err;
}
ret = binder_ioctl_get_node_info_for_ref(proc, &info);
if (ret < 0)
goto err;
if (copy_to_user(ubuf, &info, sizeof(info))) {
ret = -EFAULT;
goto err;
}
break;
}
case BINDER_GET_NODE_DEBUG_INFO: {
struct binder_node_debug_info info;
@ -4693,7 +4865,6 @@ static void binder_vma_close(struct vm_area_struct *vma)
(vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags,
(unsigned long)pgprot_val(vma->vm_page_prot));
binder_alloc_vma_close(&proc->alloc);
binder_defer_work(proc, BINDER_DEFERRED_PUT_FILES);
}
static vm_fault_t binder_vm_fault(struct vm_fault *vmf)
@ -4739,9 +4910,6 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
ret = binder_alloc_mmap_handler(&proc->alloc, vma);
if (ret)
return ret;
mutex_lock(&proc->files_lock);
proc->files = get_files_struct(current);
mutex_unlock(&proc->files_lock);
return 0;
err_bad_arg:
@ -4765,7 +4933,6 @@ static int binder_open(struct inode *nodp, struct file *filp)
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
proc->tsk = current->group_leader;
mutex_init(&proc->files_lock);
INIT_LIST_HEAD(&proc->todo);
proc->default_priority = task_nice(current);
binder_dev = container_of(filp->private_data, struct binder_device,
@ -4915,8 +5082,6 @@ static void binder_deferred_release(struct binder_proc *proc)
struct rb_node *n;
int threads, nodes, incoming_refs, outgoing_refs, active_transactions;
BUG_ON(proc->files);
mutex_lock(&binder_procs_lock);
hlist_del(&proc->proc_node);
mutex_unlock(&binder_procs_lock);
@ -4998,7 +5163,6 @@ static void binder_deferred_release(struct binder_proc *proc)
static void binder_deferred_func(struct work_struct *work)
{
struct binder_proc *proc;
struct files_struct *files;
int defer;
@ -5016,23 +5180,11 @@ static void binder_deferred_func(struct work_struct *work)
}
mutex_unlock(&binder_deferred_lock);
files = NULL;
if (defer & BINDER_DEFERRED_PUT_FILES) {
mutex_lock(&proc->files_lock);
files = proc->files;
if (files)
proc->files = NULL;
mutex_unlock(&proc->files_lock);
}
if (defer & BINDER_DEFERRED_FLUSH)
binder_deferred_flush(proc);
if (defer & BINDER_DEFERRED_RELEASE)
binder_deferred_release(proc); /* frees proc */
if (files)
put_files_struct(files);
} while (proc);
}
static DECLARE_WORK(binder_deferred_work, binder_deferred_func);
@ -5667,12 +5819,11 @@ static int __init binder_init(void)
* Copy the module_parameter string, because we don't want to
* tokenize it in-place.
*/
device_names = kzalloc(strlen(binder_devices_param) + 1, GFP_KERNEL);
device_names = kstrdup(binder_devices_param, GFP_KERNEL);
if (!device_names) {
ret = -ENOMEM;
goto err_alloc_device_names_failed;
}
strcpy(device_names, binder_devices_param);
device_tmp = device_names;
while ((device_name = strsep(&device_tmp, ","))) {
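The conversion above is behavior-preserving: kstrdup() folds the open-coded allocate-and-copy into one call. Reduced to its essence (illustrative, not part of the patch):

/* before: size, allocate, copy by hand */
device_names = kzalloc(strlen(binder_devices_param) + 1, GFP_KERNEL);
if (device_names)
	strcpy(device_names, binder_devices_param);

/* after: one call does all three */
device_names = kstrdup(binder_devices_param, GFP_KERNEL);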

View File

@ -223,22 +223,40 @@ TRACE_EVENT(binder_transaction_ref_to_ref,
__entry->dest_ref_debug_id, __entry->dest_ref_desc)
);
TRACE_EVENT(binder_transaction_fd,
TP_PROTO(struct binder_transaction *t, int src_fd, int dest_fd),
TP_ARGS(t, src_fd, dest_fd),
TRACE_EVENT(binder_transaction_fd_send,
TP_PROTO(struct binder_transaction *t, int fd, size_t offset),
TP_ARGS(t, fd, offset),
TP_STRUCT__entry(
__field(int, debug_id)
__field(int, src_fd)
__field(int, dest_fd)
__field(int, fd)
__field(size_t, offset)
),
TP_fast_assign(
__entry->debug_id = t->debug_id;
__entry->src_fd = src_fd;
__entry->dest_fd = dest_fd;
__entry->fd = fd;
__entry->offset = offset;
),
TP_printk("transaction=%d src_fd=%d ==> dest_fd=%d",
__entry->debug_id, __entry->src_fd, __entry->dest_fd)
TP_printk("transaction=%d src_fd=%d offset=%zu",
__entry->debug_id, __entry->fd, __entry->offset)
);
TRACE_EVENT(binder_transaction_fd_recv,
TP_PROTO(struct binder_transaction *t, int fd, size_t offset),
TP_ARGS(t, fd, offset),
TP_STRUCT__entry(
__field(int, debug_id)
__field(int, fd)
__field(size_t, offset)
),
TP_fast_assign(
__entry->debug_id = t->debug_id;
__entry->fd = fd;
__entry->offset = offset;
),
TP_printk("transaction=%d dest_fd=%d offset=%zu",
__entry->debug_id, __entry->fd, __entry->offset)
);
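Splitting the old binder_transaction_fd event into send and receive halves lets a trace consumer pair the two ends of an fd translation. Conceptually (a sketch; fp->fd, new_fd and offset stand in for the driver's actual locals):

/* sender side, while the fd array is translated: */
trace_binder_transaction_fd_send(t, fp->fd, offset);
/* receiver side, once the fixup installs the new fd: */
trace_binder_transaction_fd_recv(t, new_fd, offset);
/* events can then be matched on (debug_id, offset) */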
DECLARE_EVENT_CLASS(binder_buffer_class,

View File

@ -1,18 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Extcon charger detection driver for Intel Cherrytrail Whiskey Cove PMIC
* Copyright (C) 2017 Hans de Goede <hdegoede@redhat.com>
*
* Based on various non upstream patches to support the CHT Whiskey Cove PMIC:
* Copyright (C) 2013-2015 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
#include <linux/extcon-provider.h>
@ -32,10 +24,10 @@
#define CHT_WC_CHGRCTRL0_EMRGCHREN BIT(1)
#define CHT_WC_CHGRCTRL0_EXTCHRDIS BIT(2)
#define CHT_WC_CHGRCTRL0_SWCONTROL BIT(3)
#define CHT_WC_CHGRCTRL0_TTLCK_MASK BIT(4)
#define CHT_WC_CHGRCTRL0_CCSM_OFF_MASK BIT(5)
#define CHT_WC_CHGRCTRL0_DBPOFF_MASK BIT(6)
#define CHT_WC_CHGRCTRL0_WDT_NOKICK BIT(7)
#define CHT_WC_CHGRCTRL0_TTLCK BIT(4)
#define CHT_WC_CHGRCTRL0_CCSM_OFF BIT(5)
#define CHT_WC_CHGRCTRL0_DBPOFF BIT(6)
#define CHT_WC_CHGRCTRL0_CHR_WDT_NOKICK BIT(7)
#define CHT_WC_CHGRCTRL1 0x5e17
@ -52,7 +44,7 @@
#define CHT_WC_USBSRC_TYPE_ACA 4
#define CHT_WC_USBSRC_TYPE_SE1 5
#define CHT_WC_USBSRC_TYPE_MHL 6
#define CHT_WC_USBSRC_TYPE_FLOAT_DP_DN 7
#define CHT_WC_USBSRC_TYPE_FLOATING 7
#define CHT_WC_USBSRC_TYPE_OTHER 8
#define CHT_WC_USBSRC_TYPE_DCP_EXTPHY 9
@ -61,9 +53,12 @@
#define CHT_WC_PWRSRC_STS 0x6e1e
#define CHT_WC_PWRSRC_VBUS BIT(0)
#define CHT_WC_PWRSRC_DC BIT(1)
#define CHT_WC_PWRSRC_BAT BIT(2)
#define CHT_WC_PWRSRC_ID_GND BIT(3)
#define CHT_WC_PWRSRC_ID_FLOAT BIT(4)
#define CHT_WC_PWRSRC_BATT BIT(2)
#define CHT_WC_PWRSRC_USBID_MASK GENMASK(4, 3)
#define CHT_WC_PWRSRC_USBID_SHIFT 3
#define CHT_WC_PWRSRC_RID_ACA 0
#define CHT_WC_PWRSRC_RID_GND 1
#define CHT_WC_PWRSRC_RID_FLOAT 2
#define CHT_WC_VBUS_GPIO_CTLO 0x6e2d
#define CHT_WC_VBUS_GPIO_CTLO_OUTPUT BIT(0)
@ -104,16 +99,20 @@ struct cht_wc_extcon_data {
static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts)
{
if (pwrsrc_sts & CHT_WC_PWRSRC_ID_GND)
switch ((pwrsrc_sts & CHT_WC_PWRSRC_USBID_MASK) >> CHT_WC_PWRSRC_USBID_SHIFT) {
case CHT_WC_PWRSRC_RID_GND:
return USB_ID_GND;
if (pwrsrc_sts & CHT_WC_PWRSRC_ID_FLOAT)
case CHT_WC_PWRSRC_RID_FLOAT:
return USB_ID_FLOAT;
/*
* Once we have iio support for the gpadc we should read the USBID
* gpadc channel here and determine ACA role based on that.
*/
return USB_ID_FLOAT;
case CHT_WC_PWRSRC_RID_ACA:
default:
/*
* Once we have IIO support for the GPADC we should read
* the USBID GPADC channel here and determine ACA role
* based on that.
*/
return USB_ID_FLOAT;
}
}
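So the two single-bit ID tests become a decode of a two-bit RID field; the extraction reduces to this (sketch mirroring the switch above):

/* rid is one of CHT_WC_PWRSRC_RID_{ACA,GND,FLOAT} after the shift: */
rid = (pwrsrc_sts & CHT_WC_PWRSRC_USBID_MASK) >> CHT_WC_PWRSRC_USBID_SHIFT;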
static int cht_wc_extcon_get_charger(struct cht_wc_extcon_data *ext,
@ -156,9 +155,9 @@ static int cht_wc_extcon_get_charger(struct cht_wc_extcon_data *ext,
dev_warn(ext->dev,
"Unhandled charger type %d, defaulting to SDP\n",
ret);
/* Fall through, treat as SDP */
return EXTCON_CHG_USB_SDP;
case CHT_WC_USBSRC_TYPE_SDP:
case CHT_WC_USBSRC_TYPE_FLOAT_DP_DN:
case CHT_WC_USBSRC_TYPE_FLOATING:
case CHT_WC_USBSRC_TYPE_OTHER:
return EXTCON_CHG_USB_SDP;
case CHT_WC_USBSRC_TYPE_CDP:
@ -279,7 +278,7 @@ static int cht_wc_extcon_sw_control(struct cht_wc_extcon_data *ext, bool enable)
{
int ret, mask, val;
mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF_MASK;
mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF;
val = enable ? mask : 0;
ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL0, mask, val);
if (ret)
@ -292,6 +291,7 @@ static int cht_wc_extcon_probe(struct platform_device *pdev)
{
struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
struct cht_wc_extcon_data *ext;
unsigned long mask = ~(CHT_WC_PWRSRC_VBUS | CHT_WC_PWRSRC_USBID_MASK);
int irq, ret;
irq = platform_get_irq(pdev, 0);
@ -352,9 +352,7 @@ static int cht_wc_extcon_probe(struct platform_device *pdev)
}
/* Unmask irqs */
ret = regmap_write(ext->regmap, CHT_WC_PWRSRC_IRQ_MASK,
(int)~(CHT_WC_PWRSRC_VBUS | CHT_WC_PWRSRC_ID_GND |
CHT_WC_PWRSRC_ID_FLOAT));
ret = regmap_write(ext->regmap, CHT_WC_PWRSRC_IRQ_MASK, mask);
if (ret) {
dev_err(ext->dev, "Error writing irq-mask: %d\n", ret);
goto disable_sw_control;

View File

@ -1,3 +1,4 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Intel INT3496 ACPI device extcon driver
*
@ -7,15 +8,6 @@
*
* Copyright (c) 2014, Intel Corporation.
* Author: David Cohen <david.a.cohen@linux.intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/acpi.h>
@ -192,4 +184,4 @@ module_platform_driver(int3496_driver);
MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>");
MODULE_DESCRIPTION("Intel INT3496 ACPI device extcon driver");
MODULE_LICENSE("GPL");
MODULE_LICENSE("GPL v2");

View File

@ -1,20 +1,10 @@
/*
* extcon-max14577.c - MAX14577/77836 extcon driver to support MUIC
*
* Copyright (C) 2013,2014 Samsung Electronics
* Chanwoo Choi <cw00.choi@samsung.com>
* Krzysztof Kozlowski <krzk@kernel.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
// SPDX-License-Identifier: GPL-2.0+
//
// extcon-max14577.c - MAX14577/77836 extcon driver to support MUIC
//
// Copyright (C) 2013,2014 Samsung Electronics
// Chanwoo Choi <cw00.choi@samsung.com>
// Krzysztof Kozlowski <krzk@kernel.org>
#include <linux/kernel.h>
#include <linux/module.h>

View File

@ -1,19 +1,9 @@
/*
* extcon-max77693.c - MAX77693 extcon driver to support MAX77693 MUIC
*
* Copyright (C) 2012 Samsung Electronics
* Chanwoo Choi <cw00.choi@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
// SPDX-License-Identifier: GPL-2.0+
//
// extcon-max77693.c - MAX77693 extcon driver to support MAX77693 MUIC
//
// Copyright (C) 2012 Samsung Electronics
// Chanwoo Choi <cw00.choi@samsung.com>
#include <linux/kernel.h>
#include <linux/module.h>

View File

@ -1,15 +1,10 @@
/*
* extcon-max77843.c - Maxim MAX77843 extcon driver to support
* MUIC(Micro USB Interface Controller)
*
* Copyright (C) 2015 Samsung Electronics
* Author: Jaewon Kim <jaewon02.kim@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
// SPDX-License-Identifier: GPL-2.0+
//
// extcon-max77843.c - Maxim MAX77843 extcon driver to support
// MUIC(Micro USB Interface Controller)
//
// Copyright (C) 2015 Samsung Electronics
// Author: Jaewon Kim <jaewon02.kim@samsung.com>
#include <linux/extcon-provider.h>
#include <linux/i2c.h>

View File

@ -1,19 +1,9 @@
/*
* extcon-max8997.c - MAX8997 extcon driver to support MAX8997 MUIC
*
* Copyright (C) 2012 Samsung Electronics
* Donggeun Kim <dg77.kim@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
// SPDX-License-Identifier: GPL-2.0+
//
// extcon-max8997.c - MAX8997 extcon driver to support MAX8997 MUIC
//
// Copyright (C) 2012 Samsung Electronics
// Donggeun Kim <dg77.kim@samsung.com>
#include <linux/kernel.h>
#include <linux/module.h>

View File

@ -628,7 +628,7 @@ int extcon_get_property(struct extcon_dev *edev, unsigned int id,
unsigned long flags;
int index, ret = 0;
*prop_val = (union extcon_property_value)(0);
*prop_val = (union extcon_property_value){0};
if (!edev)
return -EINVAL;
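A side note on the initializer change above: casting a scalar to a union type, as in the old line, is a GNU C extension, while the compound literal that replaces it is standard C99; both yield a zeroed value (illustrative comparison, not code from the patch):

union extcon_property_value a = (union extcon_property_value)(0);	/* GNU cast-to-union extension */
union extcon_property_value b = (union extcon_property_value){0};	/* C99 compound literal */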
@ -1123,7 +1123,6 @@ int extcon_dev_register(struct extcon_dev *edev)
(unsigned long)atomic_inc_return(&edev_no));
if (edev->max_supported) {
char buf[10];
char *str;
struct extcon_cable *cable;
@ -1137,9 +1136,7 @@ int extcon_dev_register(struct extcon_dev *edev)
for (index = 0; index < edev->max_supported; index++) {
cable = &edev->cables[index];
snprintf(buf, 10, "cable.%d", index);
str = kzalloc(strlen(buf) + 1,
GFP_KERNEL);
str = kasprintf(GFP_KERNEL, "cable.%d", index);
if (!str) {
for (index--; index >= 0; index--) {
cable = &edev->cables[index];
@ -1149,7 +1146,6 @@ int extcon_dev_register(struct extcon_dev *edev)
goto err_alloc_cables;
}
strcpy(str, buf);
cable->edev = edev;
cable->cable_index = index;
@ -1172,7 +1168,6 @@ int extcon_dev_register(struct extcon_dev *edev)
}
if (edev->max_supported && edev->mutually_exclusive) {
char buf[80];
char *name;
/* Count the size of mutually_exclusive array */
@ -1197,9 +1192,8 @@ int extcon_dev_register(struct extcon_dev *edev)
}
for (index = 0; edev->mutually_exclusive[index]; index++) {
sprintf(buf, "0x%x", edev->mutually_exclusive[index]);
name = kzalloc(strlen(buf) + 1,
GFP_KERNEL);
name = kasprintf(GFP_KERNEL, "0x%x",
edev->mutually_exclusive[index]);
if (!name) {
for (index--; index >= 0; index--) {
kfree(edev->d_attrs_muex[index].attr.
@ -1210,7 +1204,6 @@ int extcon_dev_register(struct extcon_dev *edev)
ret = -ENOMEM;
goto err_muex;
}
strcpy(name, buf);
sysfs_attr_init(&edev->d_attrs_muex[index].attr);
edev->d_attrs_muex[index].attr.name = name;
edev->d_attrs_muex[index].attr.mode = 0000;
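Both hunks above apply the same simplification: kasprintf() allocates and formats in one step, replacing the stack-buffer/kzalloc()/strcpy() dance. Reduced sketch of the cable-name case (illustrative only; the real code unwinds previously created names on failure):

/* before: format, size, allocate, copy */
snprintf(buf, 10, "cable.%d", index);
str = kzalloc(strlen(buf) + 1, GFP_KERNEL);
if (!str)
	return -ENOMEM;
strcpy(str, buf);

/* after: allocate and format in one call */
str = kasprintf(GFP_KERNEL, "cable.%d", index);
if (!str)
	return -ENOMEM;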

View File

@ -10,37 +10,31 @@ if GOOGLE_FIRMWARE
config GOOGLE_SMI
tristate "SMI interface for Google platforms"
depends on X86 && ACPI && DMI && EFI
select EFI_VARS
depends on X86 && ACPI && DMI
help
Say Y here if you want to enable SMI callbacks for Google
platforms. This provides an interface for writing to and
clearing the EFI event log and reading and writing NVRAM
clearing the event log. If EFI_VARS is also enabled this
driver provides an interface for reading and writing NVRAM
variables.
config GOOGLE_COREBOOT_TABLE
tristate
depends on GOOGLE_COREBOOT_TABLE_ACPI || GOOGLE_COREBOOT_TABLE_OF
config GOOGLE_COREBOOT_TABLE_ACPI
tristate "Coreboot Table Access - ACPI"
depends on ACPI
select GOOGLE_COREBOOT_TABLE
tristate "Coreboot Table Access"
depends on ACPI || OF
help
This option enables the coreboot_table module, which provides other
firmware modules to access to the coreboot table. The coreboot table
pointer is accessed through the ACPI "GOOGCB00" object.
firmware modules access to the coreboot table. The coreboot table
pointer is accessed through the ACPI "GOOGCB00" object or the
device tree node /firmware/coreboot.
If unsure say N.
config GOOGLE_COREBOOT_TABLE_OF
tristate "Coreboot Table Access - Device Tree"
depends on OF
config GOOGLE_COREBOOT_TABLE_ACPI
tristate
select GOOGLE_COREBOOT_TABLE
config GOOGLE_COREBOOT_TABLE_OF
tristate
select GOOGLE_COREBOOT_TABLE
help
This option enables the coreboot_table module, which provides other
firmware modules access to the coreboot table. The coreboot table
pointer is accessed through the device tree node /firmware/coreboot.
If unsure say N.
config GOOGLE_MEMCONSOLE
tristate

View File

@ -2,8 +2,6 @@
obj-$(CONFIG_GOOGLE_SMI) += gsmi.o
obj-$(CONFIG_GOOGLE_COREBOOT_TABLE) += coreboot_table.o
obj-$(CONFIG_GOOGLE_COREBOOT_TABLE_ACPI) += coreboot_table-acpi.o
obj-$(CONFIG_GOOGLE_COREBOOT_TABLE_OF) += coreboot_table-of.o
obj-$(CONFIG_GOOGLE_FRAMEBUFFER_COREBOOT) += framebuffer-coreboot.o
obj-$(CONFIG_GOOGLE_MEMCONSOLE) += memconsole.o
obj-$(CONFIG_GOOGLE_MEMCONSOLE_COREBOOT) += memconsole-coreboot.o

View File

@ -1,88 +0,0 @@
/*
* coreboot_table-acpi.c
*
* Using ACPI to locate Coreboot table and provide coreboot table access.
*
* Copyright 2017 Google Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License v2.0 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include "coreboot_table.h"
static int coreboot_table_acpi_probe(struct platform_device *pdev)
{
phys_addr_t phyaddr;
resource_size_t len;
struct coreboot_table_header __iomem *header = NULL;
struct resource *res;
void __iomem *ptr = NULL;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -EINVAL;
len = resource_size(res);
if (!res->start || !len)
return -EINVAL;
phyaddr = res->start;
header = ioremap_cache(phyaddr, sizeof(*header));
if (header == NULL)
return -ENOMEM;
ptr = ioremap_cache(phyaddr,
header->header_bytes + header->table_bytes);
iounmap(header);
if (!ptr)
return -ENOMEM;
return coreboot_table_init(&pdev->dev, ptr);
}
static int coreboot_table_acpi_remove(struct platform_device *pdev)
{
return coreboot_table_exit();
}
static const struct acpi_device_id cros_coreboot_acpi_match[] = {
{ "GOOGCB00", 0 },
{ "BOOT0000", 0 },
{ }
};
MODULE_DEVICE_TABLE(acpi, cros_coreboot_acpi_match);
static struct platform_driver coreboot_table_acpi_driver = {
.probe = coreboot_table_acpi_probe,
.remove = coreboot_table_acpi_remove,
.driver = {
.name = "coreboot_table_acpi",
.acpi_match_table = ACPI_PTR(cros_coreboot_acpi_match),
},
};
static int __init coreboot_table_acpi_init(void)
{
return platform_driver_register(&coreboot_table_acpi_driver);
}
module_init(coreboot_table_acpi_init);
MODULE_AUTHOR("Google, Inc.");
MODULE_LICENSE("GPL");

View File

@ -1,82 +0,0 @@
/*
* coreboot_table-of.c
*
* Coreboot table access through open firmware.
*
* Copyright 2017 Google Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License v2.0 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/device.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include "coreboot_table.h"
static int coreboot_table_of_probe(struct platform_device *pdev)
{
struct device_node *fw_dn = pdev->dev.of_node;
void __iomem *ptr;
ptr = of_iomap(fw_dn, 0);
of_node_put(fw_dn);
if (!ptr)
return -ENOMEM;
return coreboot_table_init(&pdev->dev, ptr);
}
static int coreboot_table_of_remove(struct platform_device *pdev)
{
return coreboot_table_exit();
}
static const struct of_device_id coreboot_of_match[] = {
{ .compatible = "coreboot" },
{},
};
static struct platform_driver coreboot_table_of_driver = {
.probe = coreboot_table_of_probe,
.remove = coreboot_table_of_remove,
.driver = {
.name = "coreboot_table_of",
.of_match_table = coreboot_of_match,
},
};
static int __init platform_coreboot_table_of_init(void)
{
struct platform_device *pdev;
struct device_node *of_node;
/* Limit device creation to the presence of /firmware/coreboot node */
of_node = of_find_node_by_path("/firmware/coreboot");
if (!of_node)
return -ENODEV;
if (!of_match_node(coreboot_of_match, of_node))
return -ENODEV;
pdev = of_platform_device_create(of_node, "coreboot_table_of", NULL);
if (!pdev)
return -ENODEV;
return platform_driver_register(&coreboot_table_of_driver);
}
module_init(platform_coreboot_table_of_init);
MODULE_AUTHOR("Google, Inc.");
MODULE_LICENSE("GPL");

View File

@ -16,12 +16,15 @@
* GNU General Public License for more details.
*/
#include <linux/acpi.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include "coreboot_table.h"
@ -29,8 +32,6 @@
#define CB_DEV(d) container_of(d, struct coreboot_device, dev)
#define CB_DRV(d) container_of(d, struct coreboot_driver, drv)
static struct coreboot_table_header __iomem *ptr_header;
static int coreboot_bus_match(struct device *dev, struct device_driver *drv)
{
struct coreboot_device *device = CB_DEV(dev);
@ -70,12 +71,6 @@ static struct bus_type coreboot_bus_type = {
.remove = coreboot_bus_remove,
};
static int __init coreboot_bus_init(void)
{
return bus_register(&coreboot_bus_type);
}
module_init(coreboot_bus_init);
static void coreboot_device_release(struct device *dev)
{
struct coreboot_device *device = CB_DEV(dev);
@ -97,62 +92,117 @@ void coreboot_driver_unregister(struct coreboot_driver *driver)
}
EXPORT_SYMBOL(coreboot_driver_unregister);
int coreboot_table_init(struct device *dev, void __iomem *ptr)
static int coreboot_table_populate(struct device *dev, void *ptr)
{
int i, ret;
void *ptr_entry;
struct coreboot_device *device;
struct coreboot_table_entry entry;
struct coreboot_table_header header;
struct coreboot_table_entry *entry;
struct coreboot_table_header *header = ptr;
ptr_header = ptr;
memcpy_fromio(&header, ptr_header, sizeof(header));
ptr_entry = ptr + header->header_bytes;
for (i = 0; i < header->table_entries; i++) {
entry = ptr_entry;
if (strncmp(header.signature, "LBIO", sizeof(header.signature))) {
pr_warn("coreboot_table: coreboot table missing or corrupt!\n");
return -ENODEV;
}
ptr_entry = (void *)ptr_header + header.header_bytes;
for (i = 0; i < header.table_entries; i++) {
memcpy_fromio(&entry, ptr_entry, sizeof(entry));
device = kzalloc(sizeof(struct device) + entry.size, GFP_KERNEL);
if (!device) {
ret = -ENOMEM;
break;
}
device = kzalloc(sizeof(struct device) + entry->size, GFP_KERNEL);
if (!device)
return -ENOMEM;
dev_set_name(&device->dev, "coreboot%d", i);
device->dev.parent = dev;
device->dev.bus = &coreboot_bus_type;
device->dev.release = coreboot_device_release;
memcpy_fromio(&device->entry, ptr_entry, entry.size);
memcpy(&device->entry, ptr_entry, entry->size);
ret = device_register(&device->dev);
if (ret) {
put_device(&device->dev);
break;
return ret;
}
ptr_entry += entry.size;
}
return ret;
}
EXPORT_SYMBOL(coreboot_table_init);
int coreboot_table_exit(void)
{
if (ptr_header) {
bus_unregister(&coreboot_bus_type);
iounmap(ptr_header);
ptr_header = NULL;
ptr_entry += entry->size;
}
return 0;
}
EXPORT_SYMBOL(coreboot_table_exit);
static int coreboot_table_probe(struct platform_device *pdev)
{
resource_size_t len;
struct coreboot_table_header *header;
struct resource *res;
struct device *dev = &pdev->dev;
void *ptr;
int ret;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res)
return -EINVAL;
len = resource_size(res);
if (!res->start || !len)
return -EINVAL;
/* Check just the header first to make sure things are sane */
header = memremap(res->start, sizeof(*header), MEMREMAP_WB);
if (!header)
return -ENOMEM;
len = header->header_bytes + header->table_bytes;
ret = strncmp(header->signature, "LBIO", sizeof(header->signature));
memunmap(header);
if (ret) {
dev_warn(dev, "coreboot table missing or corrupt!\n");
return -ENODEV;
}
ptr = memremap(res->start, len, MEMREMAP_WB);
if (!ptr)
return -ENOMEM;
ret = bus_register(&coreboot_bus_type);
if (!ret) {
ret = coreboot_table_populate(dev, ptr);
if (ret)
bus_unregister(&coreboot_bus_type);
}
memunmap(ptr);
return ret;
}
static int coreboot_table_remove(struct platform_device *pdev)
{
bus_unregister(&coreboot_bus_type);
return 0;
}
#ifdef CONFIG_ACPI
static const struct acpi_device_id cros_coreboot_acpi_match[] = {
{ "GOOGCB00", 0 },
{ "BOOT0000", 0 },
{ }
};
MODULE_DEVICE_TABLE(acpi, cros_coreboot_acpi_match);
#endif
#ifdef CONFIG_OF
static const struct of_device_id coreboot_of_match[] = {
{ .compatible = "coreboot" },
{}
};
MODULE_DEVICE_TABLE(of, coreboot_of_match);
#endif
static struct platform_driver coreboot_table_driver = {
.probe = coreboot_table_probe,
.remove = coreboot_table_remove,
.driver = {
.name = "coreboot_table",
.acpi_match_table = ACPI_PTR(cros_coreboot_acpi_match),
.of_match_table = of_match_ptr(coreboot_of_match),
},
};
module_platform_driver(coreboot_table_driver);
MODULE_AUTHOR("Google, Inc.");
MODULE_LICENSE("GPL");
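For orientation, the header that coreboot_table_probe() sanity-checks ("LBIO" signature, then sizes and entry count) has, in essence, this layout in the driver's coreboot_table.h:

struct coreboot_table_header {
	char signature[4];	/* "LBIO" */
	u32 header_bytes;
	u32 header_checksum;
	u32 table_bytes;
	u32 table_checksum;
	u32 table_entries;
};

This is why the probe can memremap() just sizeof(*header) first, validate the signature, and only then map the full header_bytes + table_bytes span.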

View File

@ -91,10 +91,4 @@ int coreboot_driver_register(struct coreboot_driver *driver);
/* Unregister a driver that uses the data from a coreboot table. */
void coreboot_driver_unregister(struct coreboot_driver *driver);
/* Initialize coreboot table module given a pointer to iomem */
int coreboot_table_init(struct device *dev, void __iomem *ptr);
/* Cleanup coreboot table module */
int coreboot_table_exit(void);
#endif /* __COREBOOT_TABLE_H */

View File

@ -29,6 +29,7 @@
#include <linux/efi.h>
#include <linux/module.h>
#include <linux/ucs2_string.h>
#include <linux/suspend.h>
#define GSMI_SHUTDOWN_CLEAN 0 /* Clean Shutdown */
/* TODO(mikew@google.com): Tie in HARDLOCKUP_DETECTOR with NMIWDT */
@ -70,6 +71,8 @@
#define GSMI_CMD_SET_NVRAM_VAR 0x03
#define GSMI_CMD_SET_EVENT_LOG 0x08
#define GSMI_CMD_CLEAR_EVENT_LOG 0x09
#define GSMI_CMD_LOG_S0IX_SUSPEND 0x0a
#define GSMI_CMD_LOG_S0IX_RESUME 0x0b
#define GSMI_CMD_CLEAR_CONFIG 0x20
#define GSMI_CMD_HANDSHAKE_TYPE 0xC1
@ -84,7 +87,7 @@ struct gsmi_buf {
u32 address; /* physical address of buffer */
};
struct gsmi_device {
static struct gsmi_device {
struct platform_device *pdev; /* platform device */
struct gsmi_buf *name_buf; /* variable name buffer */
struct gsmi_buf *data_buf; /* generic data buffer */
@ -122,7 +125,6 @@ struct gsmi_log_entry_type_1 {
u32 instance;
} __packed;
/*
* Some platforms don't have explicit SMI handshake
* and need to wait for SMI to complete.
@ -133,6 +135,15 @@ module_param(spincount, uint, 0600);
MODULE_PARM_DESC(spincount,
"The number of loop iterations to use when using the spin handshake.");
/*
* Platforms might not support S0ix logging in their GSMI handlers. In order to
* avoid any side-effects of generating an SMI for S0ix logging, use the S0ix
* related GSMI commands only for those platforms that explicitly enable this
* option.
*/
static bool s0ix_logging_enable;
module_param(s0ix_logging_enable, bool, 0600);
static struct gsmi_buf *gsmi_buf_alloc(void)
{
struct gsmi_buf *smibuf;
@ -289,6 +300,10 @@ static int gsmi_exec(u8 func, u8 sub)
return rc;
}
#ifdef CONFIG_EFI_VARS
static struct efivars efivars;
static efi_status_t gsmi_get_variable(efi_char16_t *name,
efi_guid_t *vendor, u32 *attr,
unsigned long *data_size,
@ -466,6 +481,8 @@ static const struct efivar_operations efivar_ops = {
.get_next_variable = gsmi_get_next_variable,
};
#endif /* CONFIG_EFI_VARS */
static ssize_t eventlog_write(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t pos, size_t count)
@ -480,11 +497,10 @@ static ssize_t eventlog_write(struct file *filp, struct kobject *kobj,
if (count < sizeof(u32))
return -EINVAL;
param.type = *(u32 *)buf;
count -= sizeof(u32);
buf += sizeof(u32);
/* The remaining buffer is the data payload */
if (count > gsmi_dev.data_buf->length)
if ((count - sizeof(u32)) > gsmi_dev.data_buf->length)
return -EINVAL;
param.data_len = count - sizeof(u32);
@ -504,7 +520,7 @@ static ssize_t eventlog_write(struct file *filp, struct kobject *kobj,
spin_unlock_irqrestore(&gsmi_dev.lock, flags);
return rc;
return (rc == 0) ? count : rc;
}
@ -716,6 +732,12 @@ static const struct dmi_system_id gsmi_dmi_table[] __initconst = {
DMI_MATCH(DMI_BOARD_VENDOR, "Google, Inc."),
},
},
{
.ident = "Coreboot Firmware",
.matches = {
DMI_MATCH(DMI_BIOS_VENDOR, "coreboot"),
},
},
{}
};
MODULE_DEVICE_TABLE(dmi, gsmi_dmi_table);
@ -762,7 +784,6 @@ static __init int gsmi_system_valid(void)
}
static struct kobject *gsmi_kobj;
static struct efivars efivars;
static const struct platform_device_info gsmi_dev_info = {
.name = "gsmi",
@ -771,6 +792,78 @@ static const struct platform_device_info gsmi_dev_info = {
.dma_mask = DMA_BIT_MASK(32),
};
#ifdef CONFIG_PM
static void gsmi_log_s0ix_info(u8 cmd)
{
unsigned long flags;
/*
* If platform has not enabled S0ix logging, then no action is
* necessary.
*/
if (!s0ix_logging_enable)
return;
spin_lock_irqsave(&gsmi_dev.lock, flags);
memset(gsmi_dev.param_buf->start, 0, gsmi_dev.param_buf->length);
gsmi_exec(GSMI_CALLBACK, cmd);
spin_unlock_irqrestore(&gsmi_dev.lock, flags);
}
static int gsmi_log_s0ix_suspend(struct device *dev)
{
/*
* If system is not suspending via firmware using the standard ACPI Sx
* types, then make a GSMI call to log the suspend info.
*/
if (!pm_suspend_via_firmware())
gsmi_log_s0ix_info(GSMI_CMD_LOG_S0IX_SUSPEND);
/*
* Always return success, since we do not want suspend
* to fail just because of logging failure.
*/
return 0;
}
static int gsmi_log_s0ix_resume(struct device *dev)
{
/*
* If system did not resume via firmware, then make a GSMI call to log
* the resume info and wake source.
*/
if (!pm_resume_via_firmware())
gsmi_log_s0ix_info(GSMI_CMD_LOG_S0IX_RESUME);
/*
* Always return success, since we do not want resume
* to fail just because of logging failure.
*/
return 0;
}
static const struct dev_pm_ops gsmi_pm_ops = {
.suspend_noirq = gsmi_log_s0ix_suspend,
.resume_noirq = gsmi_log_s0ix_resume,
};
static int gsmi_platform_driver_probe(struct platform_device *dev)
{
return 0;
}
static struct platform_driver gsmi_driver_info = {
.driver = {
.name = "gsmi",
.pm = &gsmi_pm_ops,
},
.probe = gsmi_platform_driver_probe,
};
#endif
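Because the logging is opt-in, these hooks do nothing unless the platform sets the module parameter; a hedged sketch of the knob (the sysfs path follows the standard module-parameter layout, and the 0600 mode above makes it root-writable at runtime):

/*
 * Hypothetical invocations:
 *   modprobe gsmi s0ix_logging_enable=1
 *   echo 1 > /sys/module/gsmi/parameters/s0ix_logging_enable
 */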
static __init int gsmi_init(void)
{
unsigned long flags;
@ -782,6 +875,14 @@ static __init int gsmi_init(void)
gsmi_dev.smi_cmd = acpi_gbl_FADT.smi_command;
#ifdef CONFIG_PM
ret = platform_driver_register(&gsmi_driver_info);
if (unlikely(ret)) {
printk(KERN_ERR "gsmi: unable to register platform driver\n");
return ret;
}
#endif
/* register device */
gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info);
if (IS_ERR(gsmi_dev.pdev)) {
@ -886,11 +987,14 @@ static __init int gsmi_init(void)
goto out_remove_bin_file;
}
#ifdef CONFIG_EFI_VARS
ret = efivars_register(&efivars, &efivar_ops, gsmi_kobj);
if (ret) {
printk(KERN_INFO "gsmi: Failed to register efivars\n");
goto out_remove_sysfs_files;
sysfs_remove_files(gsmi_kobj, gsmi_attrs);
goto out_remove_bin_file;
}
#endif
register_reboot_notifier(&gsmi_reboot_notifier);
register_die_notifier(&gsmi_die_notifier);
@ -901,8 +1005,6 @@ static __init int gsmi_init(void)
return 0;
out_remove_sysfs_files:
sysfs_remove_files(gsmi_kobj, gsmi_attrs);
out_remove_bin_file:
sysfs_remove_bin_file(gsmi_kobj, &eventlog_bin_attr);
out_err:
@ -922,7 +1024,9 @@ static void __exit gsmi_exit(void)
unregister_die_notifier(&gsmi_die_notifier);
atomic_notifier_chain_unregister(&panic_notifier_list,
&gsmi_panic_notifier);
#ifdef CONFIG_EFI_VARS
efivars_unregister(&efivars);
#endif
sysfs_remove_files(gsmi_kobj, gsmi_attrs);
sysfs_remove_bin_file(gsmi_kobj, &eventlog_bin_attr);

View File

@ -198,7 +198,7 @@ static int vpd_section_init(const char *name, struct vpd_section *sec,
sec->name = name;
/* We want to export the raw partion with name ${name}_raw */
/* We want to export the raw partition with name ${name}_raw */
sec->raw_name = kasprintf(GFP_KERNEL, "%s_raw", name);
if (!sec->raw_name) {
err = -ENOMEM;

View File

@ -453,8 +453,8 @@ static int altera_cvp_probe(struct pci_dev *pdev,
snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s @%s",
ALTERA_CVP_MGR_NAME, pci_name(pdev));
mgr = fpga_mgr_create(&pdev->dev, conf->mgr_name,
&altera_cvp_ops, conf);
mgr = devm_fpga_mgr_create(&pdev->dev, conf->mgr_name,
&altera_cvp_ops, conf);
if (!mgr) {
ret = -ENOMEM;
goto err_unmap;
@ -463,10 +463,8 @@ static int altera_cvp_probe(struct pci_dev *pdev,
pci_set_drvdata(pdev, mgr);
ret = fpga_mgr_register(mgr);
if (ret) {
fpga_mgr_free(mgr);
if (ret)
goto err_unmap;
}
ret = driver_create_file(&altera_cvp_driver.driver,
&driver_attr_chkcfg);

View File

@ -121,18 +121,16 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
/* Get f2s bridge configuration saved in handoff register */
regmap_read(sysmgr, SYSMGR_ISWGRP_HANDOFF3, &priv->mask);
br = fpga_bridge_create(dev, F2S_BRIDGE_NAME,
&altera_fpga2sdram_br_ops, priv);
br = devm_fpga_bridge_create(dev, F2S_BRIDGE_NAME,
&altera_fpga2sdram_br_ops, priv);
if (!br)
return -ENOMEM;
platform_set_drvdata(pdev, br);
ret = fpga_bridge_register(br);
if (ret) {
fpga_bridge_free(br);
if (ret)
return ret;
}
dev_info(dev, "driver initialized with handoff %08x\n", priv->mask);

View File

@ -213,7 +213,6 @@ static int altera_freeze_br_probe(struct platform_device *pdev)
struct fpga_bridge *br;
struct resource *res;
u32 status, revision;
int ret;
if (!np)
return -ENODEV;
@ -245,20 +244,14 @@ static int altera_freeze_br_probe(struct platform_device *pdev)
priv->base_addr = base_addr;
br = fpga_bridge_create(dev, FREEZE_BRIDGE_NAME,
&altera_freeze_br_br_ops, priv);
br = devm_fpga_bridge_create(dev, FREEZE_BRIDGE_NAME,
&altera_freeze_br_br_ops, priv);
if (!br)
return -ENOMEM;
platform_set_drvdata(pdev, br);
ret = fpga_bridge_register(br);
if (ret) {
fpga_bridge_free(br);
return ret;
}
return 0;
return fpga_bridge_register(br);
}
static int altera_freeze_br_remove(struct platform_device *pdev)

View File

@ -180,7 +180,8 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
}
}
br = fpga_bridge_create(dev, priv->name, &altera_hps2fpga_br_ops, priv);
br = devm_fpga_bridge_create(dev, priv->name,
&altera_hps2fpga_br_ops, priv);
if (!br) {
ret = -ENOMEM;
goto err;
@ -190,12 +191,10 @@ static int alt_fpga_bridge_probe(struct platform_device *pdev)
ret = fpga_bridge_register(br);
if (ret)
goto err_free;
goto err;
return 0;
err_free:
fpga_bridge_free(br);
err:
clk_disable_unprepare(priv->clk);

View File

@ -177,7 +177,6 @@ int alt_pr_register(struct device *dev, void __iomem *reg_base)
{
struct alt_pr_priv *priv;
struct fpga_manager *mgr;
int ret;
u32 val;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@ -192,17 +191,13 @@ int alt_pr_register(struct device *dev, void __iomem *reg_base)
(val & ALT_PR_CSR_STATUS_MSK) >> ALT_PR_CSR_STATUS_SFT,
(int)(val & ALT_PR_CSR_PR_START));
mgr = fpga_mgr_create(dev, dev_name(dev), &alt_pr_ops, priv);
mgr = devm_fpga_mgr_create(dev, dev_name(dev), &alt_pr_ops, priv);
if (!mgr)
return -ENOMEM;
dev_set_drvdata(dev, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
EXPORT_SYMBOL_GPL(alt_pr_register);

View File

@ -239,7 +239,6 @@ static int altera_ps_probe(struct spi_device *spi)
struct altera_ps_conf *conf;
const struct of_device_id *of_id;
struct fpga_manager *mgr;
int ret;
conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL);
if (!conf)
@ -275,18 +274,14 @@ static int altera_ps_probe(struct spi_device *spi)
snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s %s",
dev_driver_string(&spi->dev), dev_name(&spi->dev));
mgr = fpga_mgr_create(&spi->dev, conf->mgr_name,
&altera_ps_ops, conf);
mgr = devm_fpga_mgr_create(&spi->dev, conf->mgr_name,
&altera_ps_ops, conf);
if (!mgr)
return -ENOMEM;
spi_set_drvdata(spi, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int altera_ps_remove(struct spi_device *spi)

View File

@ -70,7 +70,7 @@ static int afu_dma_adjust_locked_vm(struct device *dev, long npages, bool incr)
dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %ld/%ld%s\n", current->pid,
incr ? '+' : '-', npages << PAGE_SHIFT,
current->mm->locked_vm << PAGE_SHIFT, rlimit(RLIMIT_MEMLOCK),
ret ? "- execeeded" : "");
ret ? "- exceeded" : "");
up_write(&current->mm->mmap_sem);

View File

@ -61,7 +61,6 @@ static int fme_br_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct fme_br_priv *priv;
struct fpga_bridge *br;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
@ -69,18 +68,14 @@ static int fme_br_probe(struct platform_device *pdev)
priv->pdata = dev_get_platdata(dev);
br = fpga_bridge_create(dev, "DFL FPGA FME Bridge",
&fme_bridge_ops, priv);
br = devm_fpga_bridge_create(dev, "DFL FPGA FME Bridge",
&fme_bridge_ops, priv);
if (!br)
return -ENOMEM;
platform_set_drvdata(pdev, br);
ret = fpga_bridge_register(br);
if (ret)
fpga_bridge_free(br);
return ret;
return fpga_bridge_register(br);
}
static int fme_br_remove(struct platform_device *pdev)

View File

@ -201,7 +201,7 @@ static int fme_mgr_write(struct fpga_manager *mgr,
}
if (count < 4) {
dev_err(dev, "Invaild PR bitstream size\n");
dev_err(dev, "Invalid PR bitstream size\n");
return -EINVAL;
}
@ -287,7 +287,6 @@ static int fme_mgr_probe(struct platform_device *pdev)
struct fme_mgr_priv *priv;
struct fpga_manager *mgr;
struct resource *res;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
@ -309,19 +308,15 @@ static int fme_mgr_probe(struct platform_device *pdev)
fme_mgr_get_compat_id(priv->ioaddr, compat_id);
mgr = fpga_mgr_create(dev, "DFL FME FPGA Manager",
&fme_mgr_ops, priv);
mgr = devm_fpga_mgr_create(dev, "DFL FME FPGA Manager",
&fme_mgr_ops, priv);
if (!mgr)
return -ENOMEM;
mgr->compat_id = compat_id;
platform_set_drvdata(pdev, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int fme_mgr_remove(struct platform_device *pdev)

View File

@ -39,7 +39,7 @@ static int fme_region_probe(struct platform_device *pdev)
if (IS_ERR(mgr))
return -EPROBE_DEFER;
region = fpga_region_create(dev, mgr, fme_region_get_bridges);
region = devm_fpga_region_create(dev, mgr, fme_region_get_bridges);
if (!region) {
ret = -ENOMEM;
goto eprobe_mgr_put;
@ -51,14 +51,12 @@ static int fme_region_probe(struct platform_device *pdev)
ret = fpga_region_register(region);
if (ret)
goto region_free;
goto eprobe_mgr_put;
dev_dbg(dev, "DFL FME FPGA Region probed\n");
return 0;
region_free:
fpga_region_free(region);
eprobe_mgr_put:
fpga_mgr_put(mgr);
return ret;

View File

@ -899,7 +899,7 @@ dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info)
if (!cdev)
return ERR_PTR(-ENOMEM);
cdev->region = fpga_region_create(info->dev, NULL, NULL);
cdev->region = devm_fpga_region_create(info->dev, NULL, NULL);
if (!cdev->region) {
ret = -ENOMEM;
goto free_cdev_exit;
@ -911,7 +911,7 @@ dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info)
ret = fpga_region_register(cdev->region);
if (ret)
goto free_region_exit;
goto free_cdev_exit;
/* create and init build info for enumeration */
binfo = devm_kzalloc(info->dev, sizeof(*binfo), GFP_KERNEL);
@ -942,8 +942,6 @@ dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info)
unregister_region_exit:
fpga_region_unregister(cdev->region);
free_region_exit:
fpga_region_free(cdev->region);
free_cdev_exit:
devm_kfree(info->dev, cdev);
return ERR_PTR(ret);

View File

@ -324,6 +324,9 @@ ATTRIBUTE_GROUPS(fpga_bridge);
* @br_ops: pointer to structure of fpga bridge ops
* @priv: FPGA bridge private data
*
* The caller of this function is responsible for freeing the bridge with
* fpga_bridge_free(). Using devm_fpga_bridge_create() instead is recommended.
*
* Return: struct fpga_bridge or NULL
*/
struct fpga_bridge *fpga_bridge_create(struct device *dev, const char *name,
@ -378,8 +381,8 @@ error_kfree:
EXPORT_SYMBOL_GPL(fpga_bridge_create);
/**
* fpga_bridge_free - free a fpga bridge and its id
* @bridge: FPGA bridge struct created by fpga_bridge_create
* fpga_bridge_free - free a fpga bridge created by fpga_bridge_create()
* @bridge: FPGA bridge struct
*/
void fpga_bridge_free(struct fpga_bridge *bridge)
{
@ -388,9 +391,56 @@ void fpga_bridge_free(struct fpga_bridge *bridge)
}
EXPORT_SYMBOL_GPL(fpga_bridge_free);
static void devm_fpga_bridge_release(struct device *dev, void *res)
{
struct fpga_bridge *bridge = *(struct fpga_bridge **)res;
fpga_bridge_free(bridge);
}
/**
* fpga_bridge_register - register a fpga bridge
* @bridge: FPGA bridge struct created by fpga_bridge_create
* devm_fpga_bridge_create - create and init a managed struct fpga_bridge
* @dev: FPGA bridge device from pdev
* @name: FPGA bridge name
* @br_ops: pointer to structure of fpga bridge ops
* @priv: FPGA bridge private data
*
* This function is intended for use in a FPGA bridge driver's probe function.
* After the bridge driver creates the struct with devm_fpga_bridge_create(), it
* should register the bridge with fpga_bridge_register(). The bridge driver's
* remove function should call fpga_bridge_unregister(). The bridge struct
* allocated with this function will be freed automatically on driver detach.
* This includes the case of a probe function returning an error before
* calling fpga_bridge_register(); the struct will still get cleaned up.
*
* Return: struct fpga_bridge or NULL
*/
struct fpga_bridge
*devm_fpga_bridge_create(struct device *dev, const char *name,
const struct fpga_bridge_ops *br_ops, void *priv)
{
struct fpga_bridge **ptr, *bridge;
ptr = devres_alloc(devm_fpga_bridge_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
return NULL;
bridge = fpga_bridge_create(dev, name, br_ops, priv);
if (!bridge) {
devres_free(ptr);
} else {
*ptr = bridge;
devres_add(dev, ptr);
}
return bridge;
}
EXPORT_SYMBOL_GPL(devm_fpga_bridge_create);
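With the struct devres-managed, the remove path in a converted bridge driver shrinks to just the unregister. A sketch with placeholder names:

static int my_bridge_remove(struct platform_device *pdev)
{
	struct fpga_bridge *br = platform_get_drvdata(pdev);

	fpga_bridge_unregister(br);
	/* no fpga_bridge_free(): devres releases the struct on detach */
	return 0;
}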
/**
* fpga_bridge_register - register a FPGA bridge
*
* @bridge: FPGA bridge struct
*
* Return: 0 for success, error code otherwise.
*/
@ -412,8 +462,11 @@ int fpga_bridge_register(struct fpga_bridge *bridge)
EXPORT_SYMBOL_GPL(fpga_bridge_register);
/**
* fpga_bridge_unregister - unregister and free a fpga bridge
* @bridge: FPGA bridge struct created by fpga_bridge_create
* fpga_bridge_unregister - unregister a FPGA bridge
*
* @bridge: FPGA bridge struct
*
* This function is intended for use in a FPGA bridge driver's remove function.
*/
void fpga_bridge_unregister(struct fpga_bridge *bridge)
{
@ -430,9 +483,6 @@ EXPORT_SYMBOL_GPL(fpga_bridge_unregister);
static void fpga_bridge_dev_release(struct device *dev)
{
struct fpga_bridge *bridge = to_fpga_bridge(dev);
fpga_bridge_free(bridge);
}
static int __init fpga_bridge_dev_init(void)

View File

@ -558,6 +558,9 @@ EXPORT_SYMBOL_GPL(fpga_mgr_unlock);
* @mops: pointer to structure of fpga manager ops
* @priv: fpga manager private data
*
* The caller of this function is responsible for freeing the struct with
* fpga_mgr_free(). Using devm_fpga_mgr_create() instead is recommended.
*
* Return: pointer to struct fpga_manager or NULL
*/
struct fpga_manager *fpga_mgr_create(struct device *dev, const char *name,
@ -618,8 +621,8 @@ error_kfree:
EXPORT_SYMBOL_GPL(fpga_mgr_create);
/**
* fpga_mgr_free - deallocate a FPGA manager
* @mgr: fpga manager struct created by fpga_mgr_create
* fpga_mgr_free - free a FPGA manager created with fpga_mgr_create()
* @mgr: fpga manager struct
*/
void fpga_mgr_free(struct fpga_manager *mgr)
{
@ -628,9 +631,55 @@ void fpga_mgr_free(struct fpga_manager *mgr)
}
EXPORT_SYMBOL_GPL(fpga_mgr_free);
static void devm_fpga_mgr_release(struct device *dev, void *res)
{
struct fpga_manager *mgr = *(struct fpga_manager **)res;
fpga_mgr_free(mgr);
}
/**
* devm_fpga_mgr_create - create and initialize a managed FPGA manager struct
* @dev: fpga manager device from pdev
* @name: fpga manager name
* @mops: pointer to structure of fpga manager ops
* @priv: fpga manager private data
*
* This function is intended for use in a FPGA manager driver's probe function.
* After the manager driver creates the manager struct with
* devm_fpga_mgr_create(), it should register it with fpga_mgr_register(). The
* manager driver's remove function should call fpga_mgr_unregister(). The
* manager struct allocated with this function will be freed automatically on
* driver detach. This includes the case of a probe function returning an
* error before calling fpga_mgr_register(); the struct will still get
* cleaned up.
*
* Return: pointer to struct fpga_manager or NULL
*/
struct fpga_manager *devm_fpga_mgr_create(struct device *dev, const char *name,
const struct fpga_manager_ops *mops,
void *priv)
{
struct fpga_manager **ptr, *mgr;
ptr = devres_alloc(devm_fpga_mgr_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
return NULL;
mgr = fpga_mgr_create(dev, name, mops, priv);
if (!mgr) {
devres_free(ptr);
} else {
*ptr = mgr;
devres_add(dev, ptr);
}
return mgr;
}
EXPORT_SYMBOL_GPL(devm_fpga_mgr_create);
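Put together, a converted manager driver's probe takes the shape below (a sketch with placeholder names, mirroring the driver conversions later in this pull):

static int my_mgr_probe(struct platform_device *pdev)
{
	struct my_priv *priv;
	struct fpga_manager *mgr;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	mgr = devm_fpga_mgr_create(&pdev->dev, "My FPGA Manager",
				   &my_mgr_ops, priv);
	if (!mgr)
		return -ENOMEM;

	platform_set_drvdata(pdev, mgr);

	/* no fpga_mgr_free() on error: devres cleans up the struct */
	return fpga_mgr_register(mgr);
}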
/**
* fpga_mgr_register - register a FPGA manager
* @mgr: fpga manager struct created by fpga_mgr_create
* @mgr: fpga manager struct
*
* Return: 0 on success, negative error code otherwise.
*/
@ -661,8 +710,10 @@ error_device:
EXPORT_SYMBOL_GPL(fpga_mgr_register);
/**
* fpga_mgr_unregister - unregister and free a FPGA manager
* @mgr: fpga manager struct
* fpga_mgr_unregister - unregister a FPGA manager
* @mgr: fpga manager struct
*
* This function is intended for use in a FPGA manager driver's remove function.
*/
void fpga_mgr_unregister(struct fpga_manager *mgr)
{
@ -681,9 +732,6 @@ EXPORT_SYMBOL_GPL(fpga_mgr_unregister);
static void fpga_mgr_dev_release(struct device *dev)
{
struct fpga_manager *mgr = to_fpga_manager(dev);
fpga_mgr_free(mgr);
}
static int __init fpga_mgr_class_init(void)

View File

@ -185,6 +185,10 @@ ATTRIBUTE_GROUPS(fpga_region);
* @mgr: manager that programs this region
* @get_bridges: optional function to get bridges to a list
*
* The caller of this function is responsible for freeing the resulting region
* struct with fpga_region_free(). Using devm_fpga_region_create() instead is
* recommended.
*
* Return: struct fpga_region or NULL
*/
struct fpga_region
@ -230,8 +234,8 @@ err_free:
EXPORT_SYMBOL_GPL(fpga_region_create);
/**
* fpga_region_free - free a struct fpga_region
* @region: FPGA region created by fpga_region_create
* fpga_region_free - free a FPGA region created by fpga_region_create()
* @region: FPGA region
*/
void fpga_region_free(struct fpga_region *region)
{
@ -240,21 +244,69 @@ void fpga_region_free(struct fpga_region *region)
}
EXPORT_SYMBOL_GPL(fpga_region_free);
static void devm_fpga_region_release(struct device *dev, void *res)
{
struct fpga_region *region = *(struct fpga_region **)res;
fpga_region_free(region);
}
/**
* devm_fpga_region_create - create and initialize a managed FPGA region struct
* @dev: device parent
* @mgr: manager that programs this region
* @get_bridges: optional function to get bridges to a list
*
* This function is intended for use in a FPGA region driver's probe function.
* After the region driver creates the region struct with
* devm_fpga_region_create(), it should register it with fpga_region_register().
* The region driver's remove function should call fpga_region_unregister().
* The region struct allocated with this function will be freed automatically on
* driver detach. This includes the case of a probe function returning an
* error before calling fpga_region_register(); the struct will still get
* cleaned up.
*
* Return: struct fpga_region or NULL
*/
struct fpga_region
*devm_fpga_region_create(struct device *dev,
struct fpga_manager *mgr,
int (*get_bridges)(struct fpga_region *))
{
struct fpga_region **ptr, *region;
ptr = devres_alloc(devm_fpga_region_release, sizeof(*ptr), GFP_KERNEL);
if (!ptr)
return NULL;
region = fpga_region_create(dev, mgr, get_bridges);
if (!region) {
devres_free(ptr);
} else {
*ptr = region;
devres_add(dev, ptr);
}
return region;
}
EXPORT_SYMBOL_GPL(devm_fpga_region_create);
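The region API follows the same pattern; in a region driver's probe this becomes (sketch; my_get_bridges is a placeholder callback):

region = devm_fpga_region_create(dev, mgr, my_get_bridges);
if (!region)
	return -ENOMEM;
return fpga_region_register(region);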
/**
* fpga_region_register - register a FPGA region
* @region: FPGA region created by fpga_region_create
* @region: FPGA region
*
* Return: 0 or -errno
*/
int fpga_region_register(struct fpga_region *region)
{
return device_add(&region->dev);
}
EXPORT_SYMBOL_GPL(fpga_region_register);
/**
* fpga_region_unregister - unregister and free a FPGA region
* fpga_region_unregister - unregister a FPGA region
* @region: FPGA region
*
* This function is intended for use in a FPGA region driver's remove function.
*/
void fpga_region_unregister(struct fpga_region *region)
{
@ -264,9 +316,6 @@ EXPORT_SYMBOL_GPL(fpga_region_unregister);
static void fpga_region_dev_release(struct device *dev)
{
struct fpga_region *region = to_fpga_region(dev);
fpga_region_free(region);
}
/**

View File

@ -175,18 +175,14 @@ static int ice40_fpga_probe(struct spi_device *spi)
return ret;
}
mgr = fpga_mgr_create(dev, "Lattice iCE40 FPGA Manager",
&ice40_fpga_ops, priv);
mgr = devm_fpga_mgr_create(dev, "Lattice iCE40 FPGA Manager",
&ice40_fpga_ops, priv);
if (!mgr)
return -ENOMEM;
spi_set_drvdata(spi, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int ice40_fpga_remove(struct spi_device *spi)

View File

@ -356,25 +356,20 @@ static int machxo2_spi_probe(struct spi_device *spi)
{
struct device *dev = &spi->dev;
struct fpga_manager *mgr;
int ret;
if (spi->max_speed_hz > MACHXO2_MAX_SPEED) {
dev_err(dev, "Speed is too high\n");
return -EINVAL;
}
mgr = fpga_mgr_create(dev, "Lattice MachXO2 SPI FPGA Manager",
&machxo2_ops, spi);
mgr = devm_fpga_mgr_create(dev, "Lattice MachXO2 SPI FPGA Manager",
&machxo2_ops, spi);
if (!mgr)
return -ENOMEM;
spi_set_drvdata(spi, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int machxo2_spi_remove(struct spi_device *spi)

View File

@ -410,7 +410,7 @@ static int of_fpga_region_probe(struct platform_device *pdev)
if (IS_ERR(mgr))
return -EPROBE_DEFER;
region = fpga_region_create(dev, mgr, of_fpga_region_get_bridges);
region = devm_fpga_region_create(dev, mgr, of_fpga_region_get_bridges);
if (!region) {
ret = -ENOMEM;
goto eprobe_mgr_put;
@ -418,7 +418,7 @@ static int of_fpga_region_probe(struct platform_device *pdev)
ret = fpga_region_register(region);
if (ret)
goto eprobe_free;
goto eprobe_mgr_put;
of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
dev_set_drvdata(dev, region);
@ -427,8 +427,6 @@ static int of_fpga_region_probe(struct platform_device *pdev)
return 0;
eprobe_free:
fpga_region_free(region);
eprobe_mgr_put:
fpga_mgr_put(mgr);
return ret;

View File

@ -508,8 +508,8 @@ static int socfpga_a10_fpga_probe(struct platform_device *pdev)
return -EBUSY;
}
mgr = fpga_mgr_create(dev, "SoCFPGA Arria10 FPGA Manager",
&socfpga_a10_fpga_mgr_ops, priv);
mgr = devm_fpga_mgr_create(dev, "SoCFPGA Arria10 FPGA Manager",
&socfpga_a10_fpga_mgr_ops, priv);
if (!mgr)
return -ENOMEM;
@ -517,7 +517,6 @@ static int socfpga_a10_fpga_probe(struct platform_device *pdev)
ret = fpga_mgr_register(mgr);
if (ret) {
fpga_mgr_free(mgr);
clk_disable_unprepare(priv->clk);
return ret;
}

View File

@ -571,18 +571,14 @@ static int socfpga_fpga_probe(struct platform_device *pdev)
if (ret)
return ret;
mgr = fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
&socfpga_fpga_ops, priv);
mgr = devm_fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
&socfpga_fpga_ops, priv);
if (!mgr)
return -ENOMEM;
platform_set_drvdata(pdev, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int socfpga_fpga_remove(struct platform_device *pdev)

View File

@ -118,7 +118,6 @@ static int ts73xx_fpga_probe(struct platform_device *pdev)
struct ts73xx_fpga_priv *priv;
struct fpga_manager *mgr;
struct resource *res;
int ret;
priv = devm_kzalloc(kdev, sizeof(*priv), GFP_KERNEL);
if (!priv)
@ -133,18 +132,14 @@ static int ts73xx_fpga_probe(struct platform_device *pdev)
return PTR_ERR(priv->io_base);
}
mgr = fpga_mgr_create(kdev, "TS-73xx FPGA Manager",
&ts73xx_fpga_ops, priv);
mgr = devm_fpga_mgr_create(kdev, "TS-73xx FPGA Manager",
&ts73xx_fpga_ops, priv);
if (!mgr)
return -ENOMEM;
platform_set_drvdata(pdev, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int ts73xx_fpga_remove(struct platform_device *pdev)

View File

@ -121,8 +121,8 @@ static int xlnx_pr_decoupler_probe(struct platform_device *pdev)
clk_disable(priv->clk);
br = fpga_bridge_create(&pdev->dev, "Xilinx PR Decoupler",
&xlnx_pr_decoupler_br_ops, priv);
br = devm_fpga_bridge_create(&pdev->dev, "Xilinx PR Decoupler",
&xlnx_pr_decoupler_br_ops, priv);
if (!br) {
err = -ENOMEM;
goto err_clk;

View File

@ -144,7 +144,6 @@ static int xilinx_spi_probe(struct spi_device *spi)
{
struct xilinx_spi_conf *conf;
struct fpga_manager *mgr;
int ret;
conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL);
if (!conf)
@ -167,18 +166,15 @@ static int xilinx_spi_probe(struct spi_device *spi)
return PTR_ERR(conf->done);
}
mgr = fpga_mgr_create(&spi->dev, "Xilinx Slave Serial FPGA Manager",
&xilinx_spi_ops, conf);
mgr = devm_fpga_mgr_create(&spi->dev,
"Xilinx Slave Serial FPGA Manager",
&xilinx_spi_ops, conf);
if (!mgr)
return -ENOMEM;
spi_set_drvdata(spi, mgr);
ret = fpga_mgr_register(mgr);
if (ret)
fpga_mgr_free(mgr);
return ret;
return fpga_mgr_register(mgr);
}
static int xilinx_spi_remove(struct spi_device *spi)

View File

@ -614,8 +614,8 @@ static int zynq_fpga_probe(struct platform_device *pdev)
clk_disable(priv->clk);
mgr = fpga_mgr_create(dev, "Xilinx Zynq FPGA Manager",
&zynq_fpga_ops, priv);
mgr = devm_fpga_mgr_create(dev, "Xilinx Zynq FPGA Manager",
&zynq_fpga_ops, priv);
if (!mgr)
return -ENOMEM;
@ -624,7 +624,6 @@ static int zynq_fpga_probe(struct platform_device *pdev)
err = fpga_mgr_register(mgr);
if (err) {
dev_err(dev, "unable to register FPGA manager\n");
fpga_mgr_free(mgr);
clk_unprepare(priv->clk);
return err;
}

View File

@ -79,85 +79,96 @@ void vmbus_setevent(struct vmbus_channel *channel)
}
EXPORT_SYMBOL_GPL(vmbus_setevent);
/*
* vmbus_open - Open the specified channel.
*/
int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
u32 recv_ringbuffer_size, void *userdata, u32 userdatalen,
void (*onchannelcallback)(void *context), void *context)
/* vmbus_free_ring - drop mapping of ring buffer */
void vmbus_free_ring(struct vmbus_channel *channel)
{
hv_ringbuffer_cleanup(&channel->outbound);
hv_ringbuffer_cleanup(&channel->inbound);
if (channel->ringbuffer_page) {
__free_pages(channel->ringbuffer_page,
get_order(channel->ringbuffer_pagecount
<< PAGE_SHIFT));
channel->ringbuffer_page = NULL;
}
}
EXPORT_SYMBOL_GPL(vmbus_free_ring);
/* vmbus_alloc_ring - allocate and map pages for ring buffer */
int vmbus_alloc_ring(struct vmbus_channel *newchannel,
u32 send_size, u32 recv_size)
{
struct page *page;
int order;
if (send_size % PAGE_SIZE || recv_size % PAGE_SIZE)
return -EINVAL;
/* Allocate the ring buffer */
order = get_order(send_size + recv_size);
page = alloc_pages_node(cpu_to_node(newchannel->target_cpu),
GFP_KERNEL|__GFP_ZERO, order);
if (!page)
page = alloc_pages(GFP_KERNEL|__GFP_ZERO, order);
if (!page)
return -ENOMEM;
newchannel->ringbuffer_page = page;
newchannel->ringbuffer_pagecount = (send_size + recv_size) >> PAGE_SHIFT;
newchannel->ringbuffer_send_offset = send_size >> PAGE_SHIFT;
return 0;
}
EXPORT_SYMBOL_GPL(vmbus_alloc_ring);
static int __vmbus_open(struct vmbus_channel *newchannel,
void *userdata, u32 userdatalen,
void (*onchannelcallback)(void *context), void *context)
{
struct vmbus_channel_open_channel *open_msg;
struct vmbus_channel_msginfo *open_info = NULL;
struct page *page = newchannel->ringbuffer_page;
u32 send_pages, recv_pages;
unsigned long flags;
int ret, err = 0;
struct page *page;
int err;
if (send_ringbuffer_size % PAGE_SIZE ||
recv_ringbuffer_size % PAGE_SIZE)
if (userdatalen > MAX_USER_DEFINED_BYTES)
return -EINVAL;
send_pages = newchannel->ringbuffer_send_offset;
recv_pages = newchannel->ringbuffer_pagecount - send_pages;
spin_lock_irqsave(&newchannel->lock, flags);
if (newchannel->state == CHANNEL_OPEN_STATE) {
newchannel->state = CHANNEL_OPENING_STATE;
} else {
if (newchannel->state != CHANNEL_OPEN_STATE) {
spin_unlock_irqrestore(&newchannel->lock, flags);
return -EINVAL;
}
spin_unlock_irqrestore(&newchannel->lock, flags);
newchannel->state = CHANNEL_OPENING_STATE;
newchannel->onchannel_callback = onchannelcallback;
newchannel->channel_callback_context = context;
/* Allocate the ring buffer */
page = alloc_pages_node(cpu_to_node(newchannel->target_cpu),
GFP_KERNEL|__GFP_ZERO,
get_order(send_ringbuffer_size +
recv_ringbuffer_size));
if (!page)
page = alloc_pages(GFP_KERNEL|__GFP_ZERO,
get_order(send_ringbuffer_size +
recv_ringbuffer_size));
if (!page) {
err = -ENOMEM;
goto error_set_chnstate;
}
newchannel->ringbuffer_pages = page_address(page);
newchannel->ringbuffer_pagecount = (send_ringbuffer_size +
recv_ringbuffer_size) >> PAGE_SHIFT;
ret = hv_ringbuffer_init(&newchannel->outbound, page,
send_ringbuffer_size >> PAGE_SHIFT);
if (ret != 0) {
err = ret;
goto error_free_pages;
}
ret = hv_ringbuffer_init(&newchannel->inbound,
&page[send_ringbuffer_size >> PAGE_SHIFT],
recv_ringbuffer_size >> PAGE_SHIFT);
if (ret != 0) {
err = ret;
goto error_free_pages;
}
err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages);
if (err)
goto error_clean_ring;
err = hv_ringbuffer_init(&newchannel->inbound,
&page[send_pages], recv_pages);
if (err)
goto error_clean_ring;
/* Establish the gpadl for the ring buffer */
newchannel->ringbuffer_gpadlhandle = 0;
ret = vmbus_establish_gpadl(newchannel,
page_address(page),
send_ringbuffer_size +
recv_ringbuffer_size,
err = vmbus_establish_gpadl(newchannel,
page_address(newchannel->ringbuffer_page),
(send_pages + recv_pages) << PAGE_SHIFT,
&newchannel->ringbuffer_gpadlhandle);
if (ret != 0) {
err = ret;
goto error_free_pages;
}
if (err)
goto error_clean_ring;
/* Create and init the channel open message */
open_info = kmalloc(sizeof(*open_info) +
@ -176,15 +187,9 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
open_msg->openid = newchannel->offermsg.child_relid;
open_msg->child_relid = newchannel->offermsg.child_relid;
open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle;
open_msg->downstream_ringbuffer_pageoffset = send_ringbuffer_size >>
PAGE_SHIFT;
open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset;
open_msg->target_vp = newchannel->target_vp;
if (userdatalen > MAX_USER_DEFINED_BYTES) {
err = -EINVAL;
goto error_free_gpadl;
}
if (userdatalen)
memcpy(open_msg->userdata, userdata, userdatalen);
@ -195,18 +200,16 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
if (newchannel->rescind) {
err = -ENODEV;
goto error_free_gpadl;
goto error_free_info;
}
ret = vmbus_post_msg(open_msg,
err = vmbus_post_msg(open_msg,
sizeof(struct vmbus_channel_open_channel), true);
trace_vmbus_open(open_msg, ret);
trace_vmbus_open(open_msg, err);
if (ret != 0) {
err = ret;
if (err != 0)
goto error_clean_msglist;
}
wait_for_completion(&open_info->waitevent);
@ -216,12 +219,12 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
if (newchannel->rescind) {
err = -ENODEV;
goto error_free_gpadl;
goto error_free_info;
}
if (open_info->response.open_result.status) {
err = -EAGAIN;
goto error_free_gpadl;
goto error_free_info;
}
newchannel->state = CHANNEL_OPENED_STATE;
@ -232,19 +235,50 @@ error_clean_msglist:
spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
list_del(&open_info->msglistentry);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
error_free_info:
kfree(open_info);
error_free_gpadl:
vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
kfree(open_info);
error_free_pages:
newchannel->ringbuffer_gpadlhandle = 0;
error_clean_ring:
hv_ringbuffer_cleanup(&newchannel->outbound);
hv_ringbuffer_cleanup(&newchannel->inbound);
__free_pages(page,
get_order(send_ringbuffer_size + recv_ringbuffer_size));
error_set_chnstate:
newchannel->state = CHANNEL_OPEN_STATE;
return err;
}
/*
* vmbus_connect_ring - Open the channel but reuse ring buffer
*/
int vmbus_connect_ring(struct vmbus_channel *newchannel,
void (*onchannelcallback)(void *context), void *context)
{
return __vmbus_open(newchannel, NULL, 0, onchannelcallback, context);
}
EXPORT_SYMBOL_GPL(vmbus_connect_ring);
/*
* vmbus_open - Open the specified channel.
*/
int vmbus_open(struct vmbus_channel *newchannel,
u32 send_ringbuffer_size, u32 recv_ringbuffer_size,
void *userdata, u32 userdatalen,
void (*onchannelcallback)(void *context), void *context)
{
int err;
err = vmbus_alloc_ring(newchannel, send_ringbuffer_size,
recv_ringbuffer_size);
if (err)
return err;
err = __vmbus_open(newchannel, userdata, userdatalen,
onchannelcallback, context);
if (err)
vmbus_free_ring(newchannel);
return err;
}
EXPORT_SYMBOL_GPL(vmbus_open);
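
The refactor above splits ring-buffer lifetime from channel state: vmbus_alloc_ring()/vmbus_free_ring() own the pages, __vmbus_open() only wires up an already-allocated ring, and vmbus_open() remains the one-shot combination for existing callers. A minimal sketch of how a client could use the new pair (the driver name, callback, and sizes are illustrative, not from this series):

	/* sketch: allocate the rings once, then (re)connect the channel */
	static int mydrv_start(struct vmbus_channel *chan)
	{
		int ret;

		/* both sizes must be multiples of PAGE_SIZE */
		ret = vmbus_alloc_ring(chan, 4 * PAGE_SIZE, 4 * PAGE_SIZE);
		if (ret)
			return ret;

		ret = vmbus_connect_ring(chan, mydrv_on_channel_cb, chan);
		if (ret)
			vmbus_free_ring(chan);
		return ret;
	}

On teardown the same split applies: vmbus_disconnect_ring() closes the channel but keeps the pages, so a later vmbus_connect_ring() can reuse them without reallocation.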
/* Used for Hyper-V Socket: a guest client's connect() to the host */
@ -612,10 +646,8 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
* in Hyper-V Manager), the driver's remove() invokes vmbus_close():
* here we should skip most of the below cleanup work.
*/
if (channel->state != CHANNEL_OPENED_STATE) {
ret = -EINVAL;
goto out;
}
if (channel->state != CHANNEL_OPENED_STATE)
return -EINVAL;
channel->state = CHANNEL_OPEN_STATE;
@ -637,11 +669,10 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
* If we failed to post the close msg,
* it is perhaps better to leak memory.
*/
goto out;
}
/* Tear down the gpadl for the channel's ring buffer */
if (channel->ringbuffer_gpadlhandle) {
else if (channel->ringbuffer_gpadlhandle) {
ret = vmbus_teardown_gpadl(channel,
channel->ringbuffer_gpadlhandle);
if (ret) {
@ -650,74 +681,78 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
* If we failed to teardown gpadl,
* it is perhaps better to leak memory.
*/
goto out;
}
channel->ringbuffer_gpadlhandle = 0;
}
/* Cleanup the ring buffers for this channel */
hv_ringbuffer_cleanup(&channel->outbound);
hv_ringbuffer_cleanup(&channel->inbound);
free_pages((unsigned long)channel->ringbuffer_pages,
get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
out:
return ret;
}
/* disconnect ring - close all channels */
int vmbus_disconnect_ring(struct vmbus_channel *channel)
{
struct vmbus_channel *cur_channel, *tmp;
unsigned long flags;
LIST_HEAD(list);
int ret;
if (channel->primary_channel != NULL)
return -EINVAL;
/* Snapshot the list of subchannels */
spin_lock_irqsave(&channel->lock, flags);
list_splice_init(&channel->sc_list, &list);
channel->num_sc = 0;
spin_unlock_irqrestore(&channel->lock, flags);
list_for_each_entry_safe(cur_channel, tmp, &list, sc_list) {
if (cur_channel->rescind)
wait_for_completion(&cur_channel->rescind_event);
mutex_lock(&vmbus_connection.channel_mutex);
if (vmbus_close_internal(cur_channel) == 0) {
vmbus_free_ring(cur_channel);
if (cur_channel->rescind)
hv_process_channel_removal(cur_channel);
}
mutex_unlock(&vmbus_connection.channel_mutex);
}
/*
* Now close the primary.
*/
mutex_lock(&vmbus_connection.channel_mutex);
ret = vmbus_close_internal(channel);
mutex_unlock(&vmbus_connection.channel_mutex);
return ret;
}
EXPORT_SYMBOL_GPL(vmbus_disconnect_ring);
/*
* vmbus_close - Close the specified channel
*/
void vmbus_close(struct vmbus_channel *channel)
{
struct list_head *cur, *tmp;
struct vmbus_channel *cur_channel;
if (channel->primary_channel != NULL) {
/*
* We will only close sub-channels when
* the primary is closed.
*/
return;
}
/*
* Close all the sub-channels first and then close the
* primary channel.
*/
list_for_each_safe(cur, tmp, &channel->sc_list) {
cur_channel = list_entry(cur, struct vmbus_channel, sc_list);
if (cur_channel->rescind) {
wait_for_completion(&cur_channel->rescind_event);
mutex_lock(&vmbus_connection.channel_mutex);
vmbus_close_internal(cur_channel);
hv_process_channel_removal(
cur_channel->offermsg.child_relid);
} else {
mutex_lock(&vmbus_connection.channel_mutex);
vmbus_close_internal(cur_channel);
}
mutex_unlock(&vmbus_connection.channel_mutex);
}
/*
* Now close the primary.
*/
mutex_lock(&vmbus_connection.channel_mutex);
vmbus_close_internal(channel);
mutex_unlock(&vmbus_connection.channel_mutex);
if (vmbus_disconnect_ring(channel) == 0)
vmbus_free_ring(channel);
}
EXPORT_SYMBOL_GPL(vmbus_close);
/**
* vmbus_sendpacket() - Send the specified buffer on the given channel
* @channel: Pointer to vmbus_channel structure.
* @buffer: Pointer to the buffer you want to receive the data into.
* @bufferlen: Maximum size of what the the buffer will hold
* @channel: Pointer to vmbus_channel structure
* @buffer: Pointer to the buffer you want to send the data from.
* @bufferlen: Maximum size of what the buffer holds.
* @requestid: Identifier of the request
* @type: Type of packet that is being send e.g. negotiate, time
* packet etc.
* @type: Type of packet that is being sent e.g. negotiate, time
* packet etc.
* @flags: 0 or VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED
*
* Sends data in @buffer directly to hyper-v via the vmbus
* This will send the data unparsed to hyper-v.
* Sends data in @buffer directly to Hyper-V via the vmbus.
* This will send the data unparsed to Hyper-V.
*
* Mainly used by Hyper-V drivers.
*/
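
A hedged example of the common in-band case, with a made-up payload struct and the address of the message reused as the request id (a frequent convention among the in-tree drivers):

	/* sketch: send a driver-defined message, no completion requested */
	struct mydrv_msg msg = { .op = MYDRV_OP_PING };	/* hypothetical */
	int ret;

	ret = vmbus_sendpacket(channel, &msg, sizeof(msg),
			       (unsigned long)&msg, /* requestid */
			       VM_PKT_DATA_INBAND, 0);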
@ -850,12 +885,13 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
EXPORT_SYMBOL_GPL(vmbus_sendpacket_mpb_desc);
/**
* vmbus_recvpacket() - Retrieve the user packet on the specified channel
* @channel: Pointer to vmbus_channel structure.
* __vmbus_recvpacket() - Retrieve the user packet on the specified channel
* @channel: Pointer to vmbus_channel structure
* @buffer: Pointer to the buffer you want to receive the data into.
* @bufferlen: Maximum size of what the the buffer will hold
* @buffer_actual_len: The actual size of the data after it was received
* @bufferlen: Maximum size of what the buffer can hold.
* @buffer_actual_len: The actual size of the data after it was received.
* @requestid: Identifier of the request
* @raw: true means keep the vmpacket_descriptor header in the received data.
*
* Receives directly from the hyper-v vmbus and puts the data it received
* into Buffer. This will receive the data unparsed from hyper-v.

View File

@ -198,24 +198,19 @@ static u16 hv_get_dev_type(const struct vmbus_channel *channel)
}
/**
* vmbus_prep_negotiate_resp() - Create default response for Hyper-V Negotiate message
* vmbus_prep_negotiate_resp() - Create default response for Negotiate message
* @icmsghdrp: Pointer to msg header structure
* @icmsg_negotiate: Pointer to negotiate message structure
* @buf: Raw buffer channel data
* @fw_version: The framework versions we can support.
* @fw_vercnt: The size of @fw_version.
* @srv_version: The service versions we can support.
* @srv_vercnt: The size of @srv_version.
* @nego_fw_version: The selected framework version.
* @nego_srv_version: The selected service version.
*
* Note: Versions are given in decreasing order.
*
* @icmsghdrp is of type &struct icmsg_hdr.
* Set up and fill in default negotiate response message.
*
* The fw_version and fw_vercnt specifies the framework version that
* we can support.
*
* The srv_version and srv_vercnt specifies the service
* versions we can support.
*
* Versions are given in decreasing order.
*
* nego_fw_version and nego_srv_version store the selected protocol versions.
*
* Mainly used by Hyper-V drivers.
*/
bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp,
@ -385,21 +380,14 @@ static void vmbus_release_relid(u32 relid)
trace_vmbus_release_relid(&msg, ret);
}
void hv_process_channel_removal(u32 relid)
void hv_process_channel_removal(struct vmbus_channel *channel)
{
struct vmbus_channel *primary_channel;
unsigned long flags;
struct vmbus_channel *primary_channel, *channel;
BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
/*
* Make sure channel is valid as we may have raced.
*/
channel = relid2channel(relid);
if (!channel)
return;
BUG_ON(!channel->rescind);
if (channel->target_cpu != get_cpu()) {
put_cpu();
smp_call_function_single(channel->target_cpu,
@ -429,7 +417,7 @@ void hv_process_channel_removal(u32 relid)
cpumask_clear_cpu(channel->target_cpu,
&primary_channel->alloced_cpus_in_node);
vmbus_release_relid(relid);
vmbus_release_relid(channel->offermsg.child_relid);
free_channel(channel);
}
@ -606,16 +594,18 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
bool perf_chn = vmbus_devs[dev_type].perf_device;
struct vmbus_channel *primary = channel->primary_channel;
int next_node;
struct cpumask available_mask;
cpumask_var_t available_mask;
struct cpumask *alloced_mask;
if ((vmbus_proto_version == VERSION_WS2008) ||
(vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
(vmbus_proto_version == VERSION_WIN7) || (!perf_chn) ||
!alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
/*
* Prior to win8, all channel interrupts are
* delivered on cpu 0.
* Also if the channel is not a performance critical
* channel, bind it to cpu 0.
* In case alloc_cpumask_var() fails, bind it to cpu 0.
*/
channel->numa_node = 0;
channel->target_cpu = 0;
@ -653,7 +643,7 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
cpumask_clear(alloced_mask);
}
cpumask_xor(&available_mask, alloced_mask,
cpumask_xor(available_mask, alloced_mask,
cpumask_of_node(primary->numa_node));
cur_cpu = -1;
@ -671,10 +661,10 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
}
while (true) {
cur_cpu = cpumask_next(cur_cpu, &available_mask);
cur_cpu = cpumask_next(cur_cpu, available_mask);
if (cur_cpu >= nr_cpu_ids) {
cur_cpu = -1;
cpumask_copy(&available_mask,
cpumask_copy(available_mask,
cpumask_of_node(primary->numa_node));
continue;
}
@ -704,6 +694,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
channel->target_cpu = cur_cpu;
channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu);
free_cpumask_var(available_mask);
}
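
Switching from an on-stack struct cpumask to cpumask_var_t matters on large-NR_CPUS configurations, where a cpumask can run to kilobytes; with CONFIG_CPUMASK_OFFSTACK the variable becomes a pointer backed by a heap allocation, which is why the code above now passes it without '&' and must pair the allocation with free_cpumask_var(). The general pattern, as a sketch:

	cpumask_var_t mask;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;	/* or fall back, as init_vp_index() does */
	cpumask_copy(mask, cpu_online_mask);
	/* ... operate on 'mask' ... */
	free_cpumask_var(mask);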
static void vmbus_wait_for_unload(void)
@ -943,7 +935,7 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
* The channel is currently not open;
* it is safe for us to cleanup the channel.
*/
hv_process_channel_removal(rescind->child_relid);
hv_process_channel_removal(channel);
} else {
complete(&channel->rescind_event);
}

View File

@ -189,6 +189,17 @@ static void hv_init_clockevent_device(struct clock_event_device *dev, int cpu)
int hv_synic_alloc(void)
{
int cpu;
struct hv_per_cpu_context *hv_cpu;
/*
* First, zero all per-cpu memory areas so hv_synic_free() can
* detect what memory has been allocated and cleanup properly
* after any failures.
*/
for_each_present_cpu(cpu) {
hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu);
memset(hv_cpu, 0, sizeof(*hv_cpu));
}
hv_context.hv_numa_map = kcalloc(nr_node_ids, sizeof(struct cpumask),
GFP_KERNEL);
@ -198,10 +209,8 @@ int hv_synic_alloc(void)
}
for_each_present_cpu(cpu) {
struct hv_per_cpu_context *hv_cpu
= per_cpu_ptr(hv_context.cpu_context, cpu);
hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu);
memset(hv_cpu, 0, sizeof(*hv_cpu));
tasklet_init(&hv_cpu->msg_dpc,
vmbus_on_msg_dpc, (unsigned long) hv_cpu);

View File

@ -689,7 +689,7 @@ static void hv_page_online_one(struct hv_hotadd_state *has, struct page *pg)
__online_page_increment_counters(pg);
__online_page_free(pg);
WARN_ON_ONCE(!spin_is_locked(&dm_device.ha_lock));
lockdep_assert_held(&dm_device.ha_lock);
dm_device.num_pages_onlined++;
}
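
The replacement is more than cosmetic: spin_is_locked() cannot tell whether the current caller holds the lock and degenerates on !CONFIG_SMP builds, so the old WARN_ON_ONCE could pass vacuously. lockdep_assert_held() states the locking contract and is actually checked whenever lockdep is enabled; a sketch of the idiom (the helper name is hypothetical):

	static void dm_account_onlined_page(void)
	{
		/* compiled out unless CONFIG_LOCKDEP; splats if not held */
		lockdep_assert_held(&dm_device.ha_lock);
		dm_device.num_pages_onlined++;
	}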

View File

@ -353,7 +353,6 @@ static void process_ib_ipinfo(void *in_msg, void *out_msg, int op)
out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled;
default:
utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id,
MAX_ADAPTER_ID_SIZE,
UTF16_LITTLE_ENDIAN,
@ -406,7 +405,7 @@ kvp_send_key(struct work_struct *dummy)
process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO);
break;
case KVP_OP_GET_IP_INFO:
process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO);
/* We only need to pass on message->kvp_hdr.operation. */
break;
case KVP_OP_SET:
switch (in_msg->body.kvp_set.data.value_type) {
@ -421,7 +420,7 @@ kvp_send_key(struct work_struct *dummy)
UTF16_LITTLE_ENDIAN,
message->body.kvp_set.data.value,
HV_KVP_EXCHANGE_MAX_VALUE_SIZE - 1) + 1;
break;
case REG_U32:
/*
@ -446,6 +445,9 @@ kvp_send_key(struct work_struct *dummy)
break;
}
break;
case KVP_OP_GET:
message->body.kvp_set.data.key_size =
utf16s_to_utf8s(
@ -454,7 +456,7 @@ kvp_send_key(struct work_struct *dummy)
UTF16_LITTLE_ENDIAN,
message->body.kvp_set.data.key,
HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
break;
case KVP_OP_DELETE:
message->body.kvp_delete.key_size =
@ -464,12 +466,12 @@ kvp_send_key(struct work_struct *dummy)
UTF16_LITTLE_ENDIAN,
message->body.kvp_delete.key,
HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1;
break;
case KVP_OP_ENUMERATE:
message->body.kvp_enum_data.index =
in_msg->body.kvp_enum_data.index;
break;
}
kvp_transaction.state = HVUTIL_USERSPACE_REQ;

View File

@ -241,6 +241,7 @@ int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
{
vunmap(ring_info->ring_buffer);
ring_info->ring_buffer = NULL;
}
/* Write to the ring buffer. */

View File

@ -498,6 +498,54 @@ static ssize_t device_show(struct device *dev,
}
static DEVICE_ATTR_RO(device);
static ssize_t driver_override_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct hv_device *hv_dev = device_to_hv_device(dev);
char *driver_override, *old, *cp;
/* We need to keep extra room for a newline */
if (count >= (PAGE_SIZE - 1))
return -EINVAL;
driver_override = kstrndup(buf, count, GFP_KERNEL);
if (!driver_override)
return -ENOMEM;
cp = strchr(driver_override, '\n');
if (cp)
*cp = '\0';
device_lock(dev);
old = hv_dev->driver_override;
if (strlen(driver_override)) {
hv_dev->driver_override = driver_override;
} else {
kfree(driver_override);
hv_dev->driver_override = NULL;
}
device_unlock(dev);
kfree(old);
return count;
}
static ssize_t driver_override_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct hv_device *hv_dev = device_to_hv_device(dev);
ssize_t len;
device_lock(dev);
len = snprintf(buf, PAGE_SIZE, "%s\n", hv_dev->driver_override);
device_unlock(dev);
return len;
}
static DEVICE_ATTR_RW(driver_override);
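
This mirrors driver_override on the PCI and platform buses: writing a driver name pins the device to that driver (hv_vmbus_get_id() below then bypasses the ID tables), and writing an empty string clears the override. A hedged userspace sketch with a placeholder device UUID; the typical use is handing a device to uio_hv_generic:

	/* sketch: pin a vmbus device to uio_hv_generic */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/bus/vmbus/devices/<device-uuid>/driver_override", "w");

		if (!f)
			return 1;
		fprintf(f, "uio_hv_generic\n"); /* the store strips the newline */
		return fclose(f) ? 1 : 0;
	}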
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_dev_attrs[] = {
&dev_attr_id.attr,
@ -528,6 +576,7 @@ static struct attribute *vmbus_dev_attrs[] = {
&dev_attr_channel_vp_mapping.attr,
&dev_attr_vendor.attr,
&dev_attr_device.attr,
&dev_attr_driver_override.attr,
NULL,
};
ATTRIBUTE_GROUPS(vmbus_dev);
@ -563,17 +612,26 @@ static inline bool is_null_guid(const uuid_le *guid)
return true;
}
/*
* Return a matching hv_vmbus_device_id pointer.
* If there is no match, return NULL.
*/
static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
const uuid_le *guid)
static const struct hv_vmbus_device_id *
hv_vmbus_dev_match(const struct hv_vmbus_device_id *id, const uuid_le *guid)
{
if (id == NULL)
return NULL; /* empty device table */
for (; !is_null_guid(&id->guid); id++)
if (!uuid_le_cmp(id->guid, *guid))
return id;
return NULL;
}
static const struct hv_vmbus_device_id *
hv_vmbus_dynid_match(struct hv_driver *drv, const uuid_le *guid)
{
const struct hv_vmbus_device_id *id = NULL;
struct vmbus_dynid *dynid;
/* Look at the dynamic ids first, before the static ones */
spin_lock(&drv->dynids.lock);
list_for_each_entry(dynid, &drv->dynids.list, node) {
if (!uuid_le_cmp(dynid->id.guid, *guid)) {
@ -583,18 +641,37 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
}
spin_unlock(&drv->dynids.lock);
if (id)
return id;
return id;
}
id = drv->id_table;
if (id == NULL)
return NULL; /* empty device table */
static const struct hv_vmbus_device_id vmbus_device_null = {
.guid = NULL_UUID_LE,
};
for (; !is_null_guid(&id->guid); id++)
if (!uuid_le_cmp(id->guid, *guid))
return id;
/*
* Return a matching hv_vmbus_device_id pointer.
* If there is no match, return NULL.
*/
static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
struct hv_device *dev)
{
const uuid_le *guid = &dev->dev_type;
const struct hv_vmbus_device_id *id;
return NULL;
/* When driver_override is set, only bind to the matching driver */
if (dev->driver_override && strcmp(dev->driver_override, drv->name))
return NULL;
/* Look at the dynamic ids first, before the static ones */
id = hv_vmbus_dynid_match(drv, guid);
if (!id)
id = hv_vmbus_dev_match(drv->id_table, guid);
/* driver_override will always match, send a dummy id */
if (!id && dev->driver_override)
id = &vmbus_device_null;
return id;
}
/* vmbus_add_dynid - add a new device ID to this driver and re-probe devices */
@ -643,7 +720,7 @@ static ssize_t new_id_store(struct device_driver *driver, const char *buf,
if (retval)
return retval;
if (hv_vmbus_get_id(drv, &guid))
if (hv_vmbus_dynid_match(drv, &guid))
return -EEXIST;
retval = vmbus_add_dynid(drv, &guid);
@ -708,7 +785,7 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
if (is_hvsock_channel(hv_dev->channel))
return drv->hvsock;
if (hv_vmbus_get_id(drv, &hv_dev->dev_type))
if (hv_vmbus_get_id(drv, hv_dev))
return 1;
return 0;
@ -725,7 +802,7 @@ static int vmbus_probe(struct device *child_device)
struct hv_device *dev = device_to_hv_device(child_device);
const struct hv_vmbus_device_id *dev_id;
dev_id = hv_vmbus_get_id(drv, &dev->dev_type);
dev_id = hv_vmbus_get_id(drv, dev);
if (drv->probe) {
ret = drv->probe(dev, dev_id);
if (ret != 0)
@ -787,10 +864,9 @@ static void vmbus_device_release(struct device *device)
struct vmbus_channel *channel = hv_dev->channel;
mutex_lock(&vmbus_connection.channel_mutex);
hv_process_channel_removal(channel->offermsg.child_relid);
hv_process_channel_removal(channel);
mutex_unlock(&vmbus_connection.channel_mutex);
kfree(hv_dev);
}
/* The one and only one */

View File

@ -406,6 +406,7 @@ static inline int catu_wait_for_ready(struct catu_drvdata *drvdata)
static int catu_enable_hw(struct catu_drvdata *drvdata, void *data)
{
int rc;
u32 control, mode;
struct etr_buf *etr_buf = data;
@ -418,6 +419,10 @@ static int catu_enable_hw(struct catu_drvdata *drvdata, void *data)
return -EBUSY;
}
rc = coresight_claim_device_unlocked(drvdata->base);
if (rc)
return rc;
control |= BIT(CATU_CONTROL_ENABLE);
if (etr_buf && etr_buf->mode == ETR_MODE_CATU) {
@ -459,6 +464,7 @@ static int catu_disable_hw(struct catu_drvdata *drvdata)
int rc = 0;
catu_write_control(drvdata, 0);
coresight_disclaim_device_unlocked(drvdata->base);
if (catu_wait_for_ready(drvdata)) {
dev_info(drvdata->dev, "Timeout while waiting for READY\n");
rc = -EAGAIN;

View File

@ -34,48 +34,87 @@ struct replicator_state {
struct coresight_device *csdev;
};
/*
* replicator_reset : Reset the replicator configuration to sane values.
*/
static void replicator_reset(struct replicator_state *drvdata)
{
CS_UNLOCK(drvdata->base);
if (!coresight_claim_device_unlocked(drvdata->base)) {
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0);
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1);
coresight_disclaim_device_unlocked(drvdata->base);
}
CS_LOCK(drvdata->base);
}
static int replicator_enable(struct coresight_device *csdev, int inport,
int outport)
{
int rc = 0;
u32 reg;
struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (outport) {
case 0:
reg = REPLICATOR_IDFILTER0;
break;
case 1:
reg = REPLICATOR_IDFILTER1;
break;
default:
WARN_ON(1);
return -EINVAL;
}
CS_UNLOCK(drvdata->base);
/*
* Ensure that the other port is disabled
* 0x00 - passing through the replicator unimpeded
* 0xff - disable (or impede) the flow of ATB data
*/
if (outport == 0) {
writel_relaxed(0x00, drvdata->base + REPLICATOR_IDFILTER0);
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1);
} else {
writel_relaxed(0x00, drvdata->base + REPLICATOR_IDFILTER1);
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0);
if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
(readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
rc = coresight_claim_device_unlocked(drvdata->base);
/* Ensure that the outport is enabled. */
if (!rc) {
writel_relaxed(0x00, drvdata->base + reg);
dev_dbg(drvdata->dev, "REPLICATOR enabled\n");
}
CS_LOCK(drvdata->base);
dev_info(drvdata->dev, "REPLICATOR enabled\n");
return 0;
return rc;
}
static void replicator_disable(struct coresight_device *csdev, int inport,
int outport)
{
u32 reg;
struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (outport) {
case 0:
reg = REPLICATOR_IDFILTER0;
break;
case 1:
reg = REPLICATOR_IDFILTER1;
break;
default:
WARN_ON(1);
return;
}
CS_UNLOCK(drvdata->base);
/* disable the flow of ATB data through port */
if (outport == 0)
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0);
else
writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1);
writel_relaxed(0xff, drvdata->base + reg);
if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) &&
(readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff))
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
dev_info(drvdata->dev, "REPLICATOR disabled\n");
dev_dbg(drvdata->dev, "REPLICATOR disabled\n");
}
static const struct coresight_ops_link replicator_link_ops = {
@ -156,7 +195,11 @@ static int replicator_probe(struct amba_device *adev, const struct amba_id *id)
desc.groups = replicator_groups;
drvdata->csdev = coresight_register(&desc);
return PTR_ERR_OR_ZERO(drvdata->csdev);
if (!IS_ERR(drvdata->csdev)) {
replicator_reset(drvdata);
return 0;
}
return PTR_ERR(drvdata->csdev);
}
#ifdef CONFIG_PM

View File

@ -5,7 +5,6 @@
* Description: CoreSight Embedded Trace Buffer driver
*/
#include <asm/local.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/types.h>
@ -28,6 +27,7 @@
#include "coresight-priv.h"
#include "coresight-etm-perf.h"
#define ETB_RAM_DEPTH_REG 0x004
#define ETB_STATUS_REG 0x00c
@ -71,8 +71,8 @@
* @miscdev: specifics to handle "/dev/xyz.etb" entry.
* @spinlock: only one at a time pls.
* @reading: synchronise user space access to etb buffer.
* @mode: this ETB is being used.
* @buf: area of memory where ETB buffer content gets sent.
* @mode: this ETB is being used.
* @buffer_depth: size of @buf.
* @trigger_cntr: amount of words to store after a trigger.
*/
@ -84,12 +84,15 @@ struct etb_drvdata {
struct miscdevice miscdev;
spinlock_t spinlock;
local_t reading;
local_t mode;
u8 *buf;
u32 mode;
u32 buffer_depth;
u32 trigger_cntr;
};
static int etb_set_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle);
static unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
{
u32 depth = 0;
@ -103,7 +106,7 @@ static unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata)
return depth;
}
static void etb_enable_hw(struct etb_drvdata *drvdata)
static void __etb_enable_hw(struct etb_drvdata *drvdata)
{
int i;
u32 depth;
@ -131,32 +134,92 @@ static void etb_enable_hw(struct etb_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static int etb_enable(struct coresight_device *csdev, u32 mode)
static int etb_enable_hw(struct etb_drvdata *drvdata)
{
u32 val;
__etb_enable_hw(drvdata);
return 0;
}
static int etb_enable_sysfs(struct coresight_device *csdev)
{
int ret = 0;
unsigned long flags;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
val = local_cmpxchg(&drvdata->mode,
CS_MODE_DISABLED, mode);
/*
* When accessing from Perf, a HW buffer can be handled
* by a single trace entity. In sysFS mode many tracers
* can be logging to the same HW buffer.
*/
if (val == CS_MODE_PERF)
return -EBUSY;
spin_lock_irqsave(&drvdata->spinlock, flags);
/* Don't mess up perf sessions. */
if (drvdata->mode == CS_MODE_PERF) {
ret = -EBUSY;
goto out;
}
/* Nothing to do, the tracer is already enabled. */
if (val == CS_MODE_SYSFS)
if (drvdata->mode == CS_MODE_SYSFS)
goto out;
spin_lock_irqsave(&drvdata->spinlock, flags);
etb_enable_hw(drvdata);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
ret = etb_enable_hw(drvdata);
if (!ret)
drvdata->mode = CS_MODE_SYSFS;
out:
dev_info(drvdata->dev, "ETB enabled\n");
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return ret;
}
static int etb_enable_perf(struct coresight_device *csdev, void *data)
{
int ret = 0;
unsigned long flags;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
spin_lock_irqsave(&drvdata->spinlock, flags);
/* No need to continue if the component is already in use. */
if (drvdata->mode != CS_MODE_DISABLED) {
ret = -EBUSY;
goto out;
}
/*
* We don't have any internal state to clean up if we fail to set up
* the perf buffer. So we can perform the step before we turn the
* ETB on and leave without cleaning up.
*/
ret = etb_set_buffer(csdev, (struct perf_output_handle *)data);
if (ret)
goto out;
ret = etb_enable_hw(drvdata);
if (!ret)
drvdata->mode = CS_MODE_PERF;
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return ret;
}
static int etb_enable(struct coresight_device *csdev, u32 mode, void *data)
{
int ret;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
switch (mode) {
case CS_MODE_SYSFS:
ret = etb_enable_sysfs(csdev);
break;
case CS_MODE_PERF:
ret = etb_enable_perf(csdev, data);
break;
default:
ret = -EINVAL;
break;
}
if (ret)
return ret;
dev_dbg(drvdata->dev, "ETB enabled\n");
return 0;
}
@ -256,13 +319,16 @@ static void etb_disable(struct coresight_device *csdev)
unsigned long flags;
spin_lock_irqsave(&drvdata->spinlock, flags);
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
/* Disable the ETB only if it needs to */
if (drvdata->mode != CS_MODE_DISABLED) {
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
drvdata->mode = CS_MODE_DISABLED;
}
spin_unlock_irqrestore(&drvdata->spinlock, flags);
local_set(&drvdata->mode, CS_MODE_DISABLED);
dev_info(drvdata->dev, "ETB disabled\n");
dev_dbg(drvdata->dev, "ETB disabled\n");
}
static void *etb_alloc_buffer(struct coresight_device *csdev, int cpu,
@ -294,12 +360,14 @@ static void etb_free_buffer(void *config)
}
static int etb_set_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
struct perf_output_handle *handle)
{
int ret = 0;
unsigned long head;
struct cs_buffers *buf = sink_config;
struct cs_buffers *buf = etm_perf_sink_config(handle);
if (!buf)
return -EINVAL;
/* wrap head around to the amount of space we have */
head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
@ -315,37 +383,7 @@ static int etb_set_buffer(struct coresight_device *csdev,
return ret;
}
static unsigned long etb_reset_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
{
unsigned long size = 0;
struct cs_buffers *buf = sink_config;
if (buf) {
/*
* In snapshot mode ->data_size holds the new address of the
* ring buffer's head. The size itself is the whole address
* range since we want the latest information.
*/
if (buf->snapshot)
handle->head = local_xchg(&buf->data_size,
buf->nr_pages << PAGE_SHIFT);
/*
* Tell the tracer PMU how much we got in this run and if
* something went wrong along the way. Nobody else can use
* this cs_buffers instance until we are done. As such
* resetting parameters here and squaring off with the ring
* buffer API in the tracer PMU is fine.
*/
size = local_xchg(&buf->data_size, 0);
}
return size;
}
static void etb_update_buffer(struct coresight_device *csdev,
static unsigned long etb_update_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
{
@ -354,13 +392,13 @@ static void etb_update_buffer(struct coresight_device *csdev,
u8 *buf_ptr;
const u32 *barrier;
u32 read_ptr, write_ptr, capacity;
u32 status, read_data, to_read;
unsigned long offset;
u32 status, read_data;
unsigned long offset, to_read;
struct cs_buffers *buf = sink_config;
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
if (!buf)
return;
return 0;
capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
@ -465,18 +503,17 @@ static void etb_update_buffer(struct coresight_device *csdev,
writel_relaxed(0x0, drvdata->base + ETB_RAM_WRITE_POINTER);
/*
* In snapshot mode all we have to do is communicate to
* perf_aux_output_end() the address of the current head. In full
* trace mode the same function expects a size to move rb->aux_head
* forward.
* In snapshot mode we have to update the handle->head to point
* to the new location.
*/
if (buf->snapshot)
local_set(&buf->data_size, (cur * PAGE_SIZE) + offset);
else
local_add(to_read, &buf->data_size);
if (buf->snapshot) {
handle->head = (cur * PAGE_SIZE) + offset;
to_read = buf->nr_pages << PAGE_SHIFT;
}
etb_enable_hw(drvdata);
CS_LOCK(drvdata->base);
return to_read;
}
static const struct coresight_ops_sink etb_sink_ops = {
@ -484,8 +521,6 @@ static const struct coresight_ops_sink etb_sink_ops = {
.disable = etb_disable,
.alloc_buffer = etb_alloc_buffer,
.free_buffer = etb_free_buffer,
.set_buffer = etb_set_buffer,
.reset_buffer = etb_reset_buffer,
.update_buffer = etb_update_buffer,
};
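
The net effect on the sink ops contract: set_buffer() and reset_buffer() are gone, enable() receives the perf handle through the new void *data argument and configures its own buffer, and update_buffer() returns how many bytes were collected so the caller can pass that straight to perf_aux_output_end(). A hypothetical sink would now register:

	static const struct coresight_ops_sink my_sink_ops = {
		.enable		= my_sink_enable,	/* (csdev, mode, data) */
		.disable	= my_sink_disable,
		.alloc_buffer	= my_sink_alloc_buffer,
		.free_buffer	= my_sink_free_buffer,
		.update_buffer	= my_sink_update_buffer, /* returns bytes */
	};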
@ -498,14 +533,14 @@ static void etb_dump(struct etb_drvdata *drvdata)
unsigned long flags;
spin_lock_irqsave(&drvdata->spinlock, flags);
if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
if (drvdata->mode == CS_MODE_SYSFS) {
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
etb_enable_hw(drvdata);
}
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_info(drvdata->dev, "ETB dumped\n");
dev_dbg(drvdata->dev, "ETB dumped\n");
}
static int etb_open(struct inode *inode, struct file *file)

View File

@ -12,6 +12,7 @@
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/perf_event.h>
#include <linux/percpu-defs.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/workqueue.h>
@ -22,20 +23,6 @@
static struct pmu etm_pmu;
static bool etm_perf_up;
/**
* struct etm_event_data - Coresight specifics associated to an event
* @work: Handle to free allocated memory outside IRQ context.
* @mask: Hold the CPU(s) this event was set for.
* @snk_config: The sink configuration.
* @path: An array of path, each slot for one CPU.
*/
struct etm_event_data {
struct work_struct work;
cpumask_t mask;
void *snk_config;
struct list_head **path;
};
static DEFINE_PER_CPU(struct perf_output_handle, ctx_handle);
static DEFINE_PER_CPU(struct coresight_device *, csdev_src);
@ -61,6 +48,18 @@ static const struct attribute_group *etm_pmu_attr_groups[] = {
NULL,
};
static inline struct list_head **
etm_event_cpu_path_ptr(struct etm_event_data *data, int cpu)
{
return per_cpu_ptr(data->path, cpu);
}
static inline struct list_head *
etm_event_cpu_path(struct etm_event_data *data, int cpu)
{
return *etm_event_cpu_path_ptr(data, cpu);
}
static void etm_event_read(struct perf_event *event) {}
static int etm_addr_filters_alloc(struct perf_event *event)
@ -114,29 +113,30 @@ static void free_event_data(struct work_struct *work)
event_data = container_of(work, struct etm_event_data, work);
mask = &event_data->mask;
/*
* First deal with the sink configuration. See comment in
* etm_setup_aux() about why we take the first available path.
*/
if (event_data->snk_config) {
/* Free the sink buffers, if there are any */
if (event_data->snk_config && !WARN_ON(cpumask_empty(mask))) {
cpu = cpumask_first(mask);
sink = coresight_get_sink(event_data->path[cpu]);
sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu));
if (sink_ops(sink)->free_buffer)
sink_ops(sink)->free_buffer(event_data->snk_config);
}
for_each_cpu(cpu, mask) {
if (!(IS_ERR_OR_NULL(event_data->path[cpu])))
coresight_release_path(event_data->path[cpu]);
struct list_head **ppath;
ppath = etm_event_cpu_path_ptr(event_data, cpu);
if (!(IS_ERR_OR_NULL(*ppath)))
coresight_release_path(*ppath);
*ppath = NULL;
}
kfree(event_data->path);
free_percpu(event_data->path);
kfree(event_data);
}
static void *alloc_event_data(int cpu)
{
int size;
cpumask_t *mask;
struct etm_event_data *event_data;
@ -145,16 +145,12 @@ static void *alloc_event_data(int cpu)
if (!event_data)
return NULL;
/* Make sure nothing disappears under us */
get_online_cpus();
size = num_online_cpus();
mask = &event_data->mask;
if (cpu != -1)
cpumask_set_cpu(cpu, mask);
else
cpumask_copy(mask, cpu_online_mask);
put_online_cpus();
cpumask_copy(mask, cpu_present_mask);
/*
* Each CPU has a single path between source and destination. As such
@ -164,8 +160,8 @@ static void *alloc_event_data(int cpu)
* unused memory when dealing with single CPU trace scenarios is small
* compared to the cost of searching through an optimized array.
*/
event_data->path = kcalloc(size,
sizeof(struct list_head *), GFP_KERNEL);
event_data->path = alloc_percpu(struct list_head *);
if (!event_data->path) {
kfree(event_data);
return NULL;
@ -206,34 +202,53 @@ static void *etm_setup_aux(int event_cpu, void **pages,
* on the cmd line. As such the "enable_sink" flag in sysFS is reset.
*/
sink = coresight_get_enabled_sink(true);
if (!sink)
if (!sink || !sink_ops(sink)->alloc_buffer)
goto err;
mask = &event_data->mask;
/* Setup the path for each CPU in a trace session */
/*
* Setup the path for each CPU in a trace session. We try to build
* trace path for each CPU in the mask. If we don't find an ETM
* for the CPU or fail to build a path, we clear the CPU from the
* mask and continue with the rest. If ever we try to trace on those
* CPUs, we can handle it and fail the session.
*/
for_each_cpu(cpu, mask) {
struct list_head *path;
struct coresight_device *csdev;
csdev = per_cpu(csdev_src, cpu);
if (!csdev)
goto err;
/*
* If there is no ETM associated with this CPU clear it from
* the mask and continue with the rest. If ever we try to trace
* on this CPU, we handle it accordingly.
*/
if (!csdev) {
cpumask_clear_cpu(cpu, mask);
continue;
}
/*
* Building a path doesn't enable it, it simply builds a
* list of devices from source to sink that can be
* referenced later when the path is actually needed.
*/
event_data->path[cpu] = coresight_build_path(csdev, sink);
if (IS_ERR(event_data->path[cpu]))
goto err;
path = coresight_build_path(csdev, sink);
if (IS_ERR(path)) {
cpumask_clear_cpu(cpu, mask);
continue;
}
*etm_event_cpu_path_ptr(event_data, cpu) = path;
}
if (!sink_ops(sink)->alloc_buffer)
/* If we don't have any CPUs ready for tracing, abort */
cpu = cpumask_first(mask);
if (cpu >= nr_cpu_ids)
goto err;
cpu = cpumask_first(mask);
/* Get the AUX specific data from the sink buffer */
/* Allocate the sink buffer for this session */
event_data->snk_config =
sink_ops(sink)->alloc_buffer(sink, cpu, pages,
nr_pages, overwrite);
@ -255,6 +270,7 @@ static void etm_event_start(struct perf_event *event, int flags)
struct etm_event_data *event_data;
struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
struct list_head *path;
if (!csdev)
goto fail;
@ -267,18 +283,14 @@ static void etm_event_start(struct perf_event *event, int flags)
if (!event_data)
goto fail;
path = etm_event_cpu_path(event_data, cpu);
/* We need a sink, no need to continue without one */
sink = coresight_get_sink(event_data->path[cpu]);
if (WARN_ON_ONCE(!sink || !sink_ops(sink)->set_buffer))
goto fail_end_stop;
/* Configure the sink */
if (sink_ops(sink)->set_buffer(sink, handle,
event_data->snk_config))
sink = coresight_get_sink(path);
if (WARN_ON_ONCE(!sink))
goto fail_end_stop;
/* Nothing will happen without a path */
if (coresight_enable_path(event_data->path[cpu], CS_MODE_PERF))
if (coresight_enable_path(path, CS_MODE_PERF, handle))
goto fail_end_stop;
/* Tell the perf core the event is alive */
@ -286,11 +298,13 @@ static void etm_event_start(struct perf_event *event, int flags)
/* Finally enable the tracer */
if (source_ops(csdev)->enable(csdev, event, CS_MODE_PERF))
goto fail_end_stop;
goto fail_disable_path;
out:
return;
fail_disable_path:
coresight_disable_path(path);
fail_end_stop:
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
perf_aux_output_end(handle, 0);
@ -306,6 +320,7 @@ static void etm_event_stop(struct perf_event *event, int mode)
struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
struct etm_event_data *event_data = perf_get_aux(handle);
struct list_head *path;
if (event->hw.state == PERF_HES_STOPPED)
return;
@ -313,7 +328,11 @@ static void etm_event_stop(struct perf_event *event, int mode)
if (!csdev)
return;
sink = coresight_get_sink(event_data->path[cpu]);
path = etm_event_cpu_path(event_data, cpu);
if (!path)
return;
sink = coresight_get_sink(path);
if (!sink)
return;
@ -331,20 +350,13 @@ static void etm_event_stop(struct perf_event *event, int mode)
if (!sink_ops(sink)->update_buffer)
return;
sink_ops(sink)->update_buffer(sink, handle,
size = sink_ops(sink)->update_buffer(sink, handle,
event_data->snk_config);
if (!sink_ops(sink)->reset_buffer)
return;
size = sink_ops(sink)->reset_buffer(sink, handle,
event_data->snk_config);
perf_aux_output_end(handle, size);
}
/* Disabling the path make its elements available to other sessions */
coresight_disable_path(event_data->path[cpu]);
coresight_disable_path(path);
}
static int etm_event_add(struct perf_event *event, int mode)

View File

@ -7,6 +7,7 @@
#ifndef _CORESIGHT_ETM_PERF_H
#define _CORESIGHT_ETM_PERF_H
#include <linux/percpu-defs.h>
#include "coresight-priv.h"
struct coresight_device;
@ -42,14 +43,39 @@ struct etm_filters {
bool ssstatus;
};
/**
* struct etm_event_data - Coresight specifics associated with an event
* @work: Handle to free allocated memory outside IRQ context.
* @mask: Holds the CPU(s) this event was set for.
* @snk_config: The sink configuration.
* @path: An array of paths, one slot per CPU.
*/
struct etm_event_data {
struct work_struct work;
cpumask_t mask;
void *snk_config;
struct list_head * __percpu *path;
};
#ifdef CONFIG_CORESIGHT
int etm_perf_symlink(struct coresight_device *csdev, bool link);
static inline void *etm_perf_sink_config(struct perf_output_handle *handle)
{
struct etm_event_data *data = perf_get_aux(handle);
if (data)
return data->snk_config;
return NULL;
}
#else
static inline int etm_perf_symlink(struct coresight_device *csdev, bool link)
{ return -EINVAL; }
static inline void *etm_perf_sink_config(struct perf_output_handle *handle)
{
return NULL;
}
#endif /* CONFIG_CORESIGHT */
#endif

View File

@ -355,11 +355,10 @@ static int etm_parse_event_config(struct etm_drvdata *drvdata,
return 0;
}
static void etm_enable_hw(void *info)
static int etm_enable_hw(struct etm_drvdata *drvdata)
{
int i;
int i, rc;
u32 etmcr;
struct etm_drvdata *drvdata = info;
struct etm_config *config = &drvdata->config;
CS_UNLOCK(drvdata->base);
@ -370,6 +369,9 @@ static void etm_enable_hw(void *info)
etm_set_pwrup(drvdata);
/* Make sure all registers are accessible */
etm_os_unlock(drvdata);
rc = coresight_claim_device_unlocked(drvdata->base);
if (rc)
goto done;
etm_set_prog(drvdata);
@ -418,9 +420,29 @@ static void etm_enable_hw(void *info)
etm_writel(drvdata, 0x0, ETMVMIDCVR);
etm_clr_prog(drvdata);
done:
if (rc)
etm_set_pwrdwn(drvdata);
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n",
drvdata->cpu, rc);
return rc;
}
struct etm_enable_arg {
struct etm_drvdata *drvdata;
int rc;
};
static void etm_enable_hw_smp_call(void *info)
{
struct etm_enable_arg *arg = info;
if (WARN_ON(!arg))
return;
arg->rc = etm_enable_hw(arg->drvdata);
}
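
smp_call_function_single() can only run a void function on the target CPU, so the hardware result has to travel back through the argument block; the wrapper above is exactly that pattern, sketched generically here with placeholder names:

	struct enable_arg {
		struct mydev_drvdata *drvdata;	/* hypothetical */
		int rc;
	};

	/* caller side */
	struct enable_arg arg = { .drvdata = drvdata };
	int ret;

	ret = smp_call_function_single(cpu, mydev_enable_smp_call, &arg, 1);
	if (!ret)
		ret = arg.rc;	/* IPI errors win, then the hw result */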
static int etm_cpu_id(struct coresight_device *csdev)
@ -475,14 +497,13 @@ static int etm_enable_perf(struct coresight_device *csdev,
/* Configure the tracer based on the session's specifics */
etm_parse_event_config(drvdata, event);
/* And enable it */
etm_enable_hw(drvdata);
return 0;
return etm_enable_hw(drvdata);
}
static int etm_enable_sysfs(struct coresight_device *csdev)
{
struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etm_enable_arg arg = { 0 };
int ret;
spin_lock(&drvdata->spinlock);
@ -492,20 +513,21 @@ static int etm_enable_sysfs(struct coresight_device *csdev)
* hw configuration will take place on the local CPU during bring up.
*/
if (cpu_online(drvdata->cpu)) {
arg.drvdata = drvdata;
ret = smp_call_function_single(drvdata->cpu,
etm_enable_hw, drvdata, 1);
if (ret)
goto err;
etm_enable_hw_smp_call, &arg, 1);
if (!ret)
ret = arg.rc;
if (!ret)
drvdata->sticky_enable = true;
} else {
ret = -ENODEV;
}
drvdata->sticky_enable = true;
spin_unlock(&drvdata->spinlock);
dev_info(drvdata->dev, "ETM tracing enabled\n");
return 0;
err:
spin_unlock(&drvdata->spinlock);
if (!ret)
dev_dbg(drvdata->dev, "ETM tracing enabled\n");
return ret;
}
@ -555,6 +577,8 @@ static void etm_disable_hw(void *info)
for (i = 0; i < drvdata->nr_cntr; i++)
config->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i));
coresight_disclaim_device_unlocked(drvdata->base);
etm_set_pwrdwn(drvdata);
CS_LOCK(drvdata->base);
@ -604,7 +628,7 @@ static void etm_disable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
cpus_read_unlock();
dev_info(drvdata->dev, "ETM tracing disabled\n");
dev_dbg(drvdata->dev, "ETM tracing disabled\n");
}
static void etm_disable(struct coresight_device *csdev,

View File

@ -28,6 +28,7 @@
#include <linux/pm_runtime.h>
#include <asm/sections.h>
#include <asm/local.h>
#include <asm/virt.h>
#include "coresight-etm4x.h"
#include "coresight-etm-perf.h"
@ -77,16 +78,24 @@ static int etm4_trace_id(struct coresight_device *csdev)
return drvdata->trcid;
}
static void etm4_enable_hw(void *info)
struct etm4_enable_arg {
struct etmv4_drvdata *drvdata;
int rc;
};
static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
{
int i;
struct etmv4_drvdata *drvdata = info;
int i, rc;
struct etmv4_config *config = &drvdata->config;
CS_UNLOCK(drvdata->base);
etm4_os_unlock(drvdata);
rc = coresight_claim_device_unlocked(drvdata->base);
if (rc)
goto done;
/* Disable the trace unit before programming trace registers */
writel_relaxed(0, drvdata->base + TRCPRGCTLR);
@ -174,9 +183,21 @@ static void etm4_enable_hw(void *info)
dev_err(drvdata->dev,
"timeout while waiting for Idle Trace Status\n");
done:
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n",
drvdata->cpu, rc);
return rc;
}
static void etm4_enable_hw_smp_call(void *info)
{
struct etm4_enable_arg *arg = info;
if (WARN_ON(!arg))
return;
arg->rc = etm4_enable_hw(arg->drvdata);
}
static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
@ -242,7 +263,7 @@ static int etm4_enable_perf(struct coresight_device *csdev,
if (ret)
goto out;
/* And enable it */
etm4_enable_hw(drvdata);
ret = etm4_enable_hw(drvdata);
out:
return ret;
@ -251,6 +272,7 @@ out:
static int etm4_enable_sysfs(struct coresight_device *csdev)
{
struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etm4_enable_arg arg = { 0 };
int ret;
spin_lock(&drvdata->spinlock);
@ -259,19 +281,17 @@ static int etm4_enable_sysfs(struct coresight_device *csdev)
* Executing etm4_enable_hw on the cpu whose ETM is being enabled
* ensures that register writes occur when cpu is powered.
*/
arg.drvdata = drvdata;
ret = smp_call_function_single(drvdata->cpu,
etm4_enable_hw, drvdata, 1);
if (ret)
goto err;
drvdata->sticky_enable = true;
etm4_enable_hw_smp_call, &arg, 1);
if (!ret)
ret = arg.rc;
if (!ret)
drvdata->sticky_enable = true;
spin_unlock(&drvdata->spinlock);
dev_info(drvdata->dev, "ETM tracing enabled\n");
return 0;
err:
spin_unlock(&drvdata->spinlock);
if (!ret)
dev_dbg(drvdata->dev, "ETM tracing enabled\n");
return ret;
}
@ -328,6 +348,8 @@ static void etm4_disable_hw(void *info)
isb();
writel_relaxed(control, drvdata->base + TRCPRGCTLR);
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
@ -380,7 +402,7 @@ static void etm4_disable_sysfs(struct coresight_device *csdev)
spin_unlock(&drvdata->spinlock);
cpus_read_unlock();
dev_info(drvdata->dev, "ETM tracing disabled\n");
dev_dbg(drvdata->dev, "ETM tracing disabled\n");
}
static void etm4_disable(struct coresight_device *csdev,
@ -605,7 +627,7 @@ static void etm4_set_default_config(struct etmv4_config *config)
config->vinst_ctrl |= BIT(0);
}
static u64 etm4_get_access_type(struct etmv4_config *config)
static u64 etm4_get_ns_access_type(struct etmv4_config *config)
{
u64 access_type = 0;
@ -616,17 +638,26 @@ static u64 etm4_get_access_type(struct etmv4_config *config)
* Bit[13] Exception level 1 - OS
* Bit[14] Exception level 2 - Hypervisor
* Bit[15] Never implemented
*
* Always stay away from hypervisor mode.
*/
access_type = ETM_EXLEVEL_NS_HYP;
if (config->mode & ETM_MODE_EXCL_KERN)
access_type |= ETM_EXLEVEL_NS_OS;
if (!is_kernel_in_hyp_mode()) {
/* Stay away from hypervisor mode for non-VHE */
access_type = ETM_EXLEVEL_NS_HYP;
if (config->mode & ETM_MODE_EXCL_KERN)
access_type |= ETM_EXLEVEL_NS_OS;
} else if (config->mode & ETM_MODE_EXCL_KERN) {
access_type = ETM_EXLEVEL_NS_HYP;
}
if (config->mode & ETM_MODE_EXCL_USER)
access_type |= ETM_EXLEVEL_NS_APP;
return access_type;
}
static u64 etm4_get_access_type(struct etmv4_config *config)
{
u64 access_type = etm4_get_ns_access_type(config);
/*
* EXLEVEL_S, bits[11:8], don't trace anything happening
* in secure state.
@ -880,20 +911,10 @@ void etm4_config_trace_mode(struct etmv4_config *config)
addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP];
/* clear default config */
addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS);
addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS |
ETM_EXLEVEL_NS_HYP);
/*
* EXLEVEL_NS, bits[15:12]
* The Exception levels are:
* Bit[12] Exception level 0 - Application
* Bit[13] Exception level 1 - OS
* Bit[14] Exception level 2 - Hypervisor
* Bit[15] Never implemented
*/
if (mode & ETM_MODE_EXCL_KERN)
addr_acc |= ETM_EXLEVEL_NS_OS;
else
addr_acc |= ETM_EXLEVEL_NS_APP;
addr_acc |= etm4_get_ns_access_type(config);
config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc;
config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc;

View File

@ -25,6 +25,7 @@
#define FUNNEL_HOLDTIME_MASK 0xf00
#define FUNNEL_HOLDTIME_SHFT 0x8
#define FUNNEL_HOLDTIME (0x7 << FUNNEL_HOLDTIME_SHFT)
#define FUNNEL_ENSx_MASK 0xff
/**
* struct funnel_drvdata - specifics associated to a funnel component
@ -42,31 +43,42 @@ struct funnel_drvdata {
unsigned long priority;
};
static void funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port)
{
u32 functl;
int rc = 0;
CS_UNLOCK(drvdata->base);
functl = readl_relaxed(drvdata->base + FUNNEL_FUNCTL);
/* Claim the device only when we enable the first slave */
if (!(functl & FUNNEL_ENSx_MASK)) {
rc = coresight_claim_device_unlocked(drvdata->base);
if (rc)
goto done;
}
functl &= ~FUNNEL_HOLDTIME_MASK;
functl |= FUNNEL_HOLDTIME;
functl |= (1 << port);
writel_relaxed(functl, drvdata->base + FUNNEL_FUNCTL);
writel_relaxed(drvdata->priority, drvdata->base + FUNNEL_PRICTL);
done:
CS_LOCK(drvdata->base);
return rc;
}
static int funnel_enable(struct coresight_device *csdev, int inport,
int outport)
{
int rc;
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
funnel_enable_hw(drvdata, inport);
rc = funnel_enable_hw(drvdata, inport);
dev_info(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
return 0;
if (!rc)
dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
return rc;
}
static void funnel_disable_hw(struct funnel_drvdata *drvdata, int inport)
@ -79,6 +91,10 @@ static void funnel_disable_hw(struct funnel_drvdata *drvdata, int inport)
functl &= ~(1 << inport);
writel_relaxed(functl, drvdata->base + FUNNEL_FUNCTL);
/* Disclaim the device if none of the slaves are now active */
if (!(functl & FUNNEL_ENSx_MASK))
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
}
@ -89,7 +105,7 @@ static void funnel_disable(struct coresight_device *csdev, int inport,
funnel_disable_hw(drvdata, inport);
dev_info(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
}
static const struct coresight_ops_link funnel_link_ops = {

View File

@ -25,6 +25,13 @@
#define CORESIGHT_DEVID 0xfc8
#define CORESIGHT_DEVTYPE 0xfcc
/*
* Coresight device CLAIM protocol.
* See PSCI - ARM DEN 0022D, Section: 6.8.1 Debug and Trace save and restore.
*/
#define CORESIGHT_CLAIM_SELF_HOSTED BIT(1)
#define TIMEOUT_US 100
#define BMVAL(val, lsb, msb) ((val & GENMASK(msb, lsb)) >> lsb)
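
CORESIGHT_CLAIM_SELF_HOSTED is the tag the drivers above set in a component's CLAIMSET/CLAIMCLR registers so self-hosted trace and an external debugger can negotiate ownership (ARM DEN 0022D, section 6.8.1). Every converted enable path follows the same claim-before-program shape, roughly:

	/* sketch of the idiom used by the drivers in this series */
	static int mydev_enable_hw(struct mydev_drvdata *drvdata)
	{
		int rc;

		CS_UNLOCK(drvdata->base);
		rc = coresight_claim_device_unlocked(drvdata->base);
		if (rc)	/* an external debugger owns the component */
			goto out;

		/* ... program and start the component ... */
	out:
		CS_LOCK(drvdata->base);
		return rc;
	}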
@ -137,7 +144,7 @@ static inline void coresight_write_reg_pair(void __iomem *addr, u64 val,
}
void coresight_disable_path(struct list_head *path);
int coresight_enable_path(struct list_head *path, u32 mode);
int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data);
struct coresight_device *coresight_get_sink(struct list_head *path);
struct coresight_device *coresight_get_enabled_sink(bool reset);
struct list_head *coresight_build_path(struct coresight_device *csdev,

View File

@ -35,7 +35,7 @@ static int replicator_enable(struct coresight_device *csdev, int inport,
{
struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
dev_info(drvdata->dev, "REPLICATOR enabled\n");
dev_dbg(drvdata->dev, "REPLICATOR enabled\n");
return 0;
}
@ -44,7 +44,7 @@ static void replicator_disable(struct coresight_device *csdev, int inport,
{
struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
dev_info(drvdata->dev, "REPLICATOR disabled\n");
dev_dbg(drvdata->dev, "REPLICATOR disabled\n");
}
static const struct coresight_ops_link replicator_link_ops = {

View File

@ -211,7 +211,7 @@ static int stm_enable(struct coresight_device *csdev,
stm_enable_hw(drvdata);
spin_unlock(&drvdata->spinlock);
dev_info(drvdata->dev, "STM tracing enabled\n");
dev_dbg(drvdata->dev, "STM tracing enabled\n");
return 0;
}
@ -274,7 +274,7 @@ static void stm_disable(struct coresight_device *csdev,
pm_runtime_put(drvdata->dev);
local_set(&drvdata->mode, CS_MODE_DISABLED);
dev_info(drvdata->dev, "STM tracing disabled\n");
dev_dbg(drvdata->dev, "STM tracing disabled\n");
}
}

View File

@ -10,8 +10,12 @@
#include <linux/slab.h>
#include "coresight-priv.h"
#include "coresight-tmc.h"
#include "coresight-etm-perf.h"
static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
static int tmc_set_etf_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle);
static void __tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
@ -30,33 +34,41 @@ static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
{
int rc = coresight_claim_device(drvdata->base);
if (rc)
return rc;
__tmc_etb_enable_hw(drvdata);
return 0;
}
static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
{
char *bufp;
u32 read_data, lost;
int i;
/* Check if the buffer wrapped around. */
lost = readl_relaxed(drvdata->base + TMC_STS) & TMC_STS_FULL;
bufp = drvdata->buf;
drvdata->len = 0;
while (1) {
for (i = 0; i < drvdata->memwidth; i++) {
read_data = readl_relaxed(drvdata->base + TMC_RRD);
if (read_data == 0xFFFFFFFF)
goto done;
memcpy(bufp, &read_data, 4);
bufp += 4;
drvdata->len += 4;
}
read_data = readl_relaxed(drvdata->base + TMC_RRD);
if (read_data == 0xFFFFFFFF)
break;
memcpy(bufp, &read_data, 4);
bufp += 4;
drvdata->len += 4;
}
done:
if (lost)
coresight_insert_barrier_packet(drvdata->buf);
return;
}
static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
static void __tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
@ -72,7 +84,13 @@ static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
{
coresight_disclaim_device(drvdata->base);
__tmc_etb_disable_hw(drvdata);
}
static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
@ -88,13 +106,24 @@ static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
{
int rc = coresight_claim_device(drvdata->base);
if (rc)
return rc;
__tmc_etf_enable_hw(drvdata);
return 0;
}
static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
tmc_flush_and_stop(drvdata);
tmc_disable_hw(drvdata);
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
}
@ -170,8 +199,12 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev)
drvdata->buf = buf;
}
drvdata->mode = CS_MODE_SYSFS;
tmc_etb_enable_hw(drvdata);
ret = tmc_etb_enable_hw(drvdata);
if (!ret)
drvdata->mode = CS_MODE_SYSFS;
else
/* Free up the buffer if we failed to enable */
used = false;
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
@ -182,37 +215,40 @@ out:
return ret;
}
static int tmc_enable_etf_sink_perf(struct coresight_device *csdev)
static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data)
{
int ret = 0;
unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct perf_output_handle *handle = data;
spin_lock_irqsave(&drvdata->spinlock, flags);
if (drvdata->reading) {
do {
ret = -EINVAL;
goto out;
}
if (drvdata->reading)
break;
/*
* In Perf mode there can be only one writer per sink. There
* is also no need to continue if the ETB/ETF is already
* operated from sysFS.
*/
if (drvdata->mode != CS_MODE_DISABLED)
break;
/*
* In Perf mode there can be only one writer per sink. There
* is also no need to continue if the ETB/ETR is already operated
* from sysFS.
*/
if (drvdata->mode != CS_MODE_DISABLED) {
ret = -EINVAL;
goto out;
}
drvdata->mode = CS_MODE_PERF;
tmc_etb_enable_hw(drvdata);
out:
ret = tmc_set_etf_buffer(csdev, handle);
if (ret)
break;
ret = tmc_etb_enable_hw(drvdata);
if (!ret)
drvdata->mode = CS_MODE_PERF;
} while (0);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return ret;
}
static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
static int tmc_enable_etf_sink(struct coresight_device *csdev,
u32 mode, void *data)
{
int ret;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@ -222,7 +258,7 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
ret = tmc_enable_etf_sink_sysfs(csdev);
break;
case CS_MODE_PERF:
ret = tmc_enable_etf_sink_perf(csdev);
ret = tmc_enable_etf_sink_perf(csdev, data);
break;
/* We shouldn't be here */
default:
@@ -233,7 +269,7 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
if (ret)
return ret;
dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n");
dev_dbg(drvdata->dev, "TMC-ETB/ETF enabled\n");
return 0;
}
@@ -256,12 +292,13 @@ static void tmc_disable_etf_sink(struct coresight_device *csdev)
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_info(drvdata->dev, "TMC-ETB/ETF disabled\n");
dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n");
}
static int tmc_enable_etf_link(struct coresight_device *csdev,
int inport, int outport)
{
int ret;
unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@@ -271,12 +308,14 @@ static int tmc_enable_etf_link(struct coresight_device *csdev,
return -EBUSY;
}
tmc_etf_enable_hw(drvdata);
drvdata->mode = CS_MODE_SYSFS;
ret = tmc_etf_enable_hw(drvdata);
if (!ret)
drvdata->mode = CS_MODE_SYSFS;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_info(drvdata->dev, "TMC-ETF enabled\n");
return 0;
if (!ret)
dev_dbg(drvdata->dev, "TMC-ETF enabled\n");
return ret;
}
static void tmc_disable_etf_link(struct coresight_device *csdev,
@@ -295,7 +334,7 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
drvdata->mode = CS_MODE_DISABLED;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_info(drvdata->dev, "TMC-ETF disabled\n");
dev_dbg(drvdata->dev, "TMC-ETF disabled\n");
}
static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu,
@@ -328,12 +367,14 @@ static void tmc_free_etf_buffer(void *config)
}
static int tmc_set_etf_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
struct perf_output_handle *handle)
{
int ret = 0;
unsigned long head;
struct cs_buffers *buf = sink_config;
struct cs_buffers *buf = etm_perf_sink_config(handle);
if (!buf)
return -EINVAL;
/* wrap head around to the amount of space we have */
head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
@@ -349,36 +390,7 @@ static int tmc_set_etf_buffer(struct coresight_device *csdev,
return ret;
}
static unsigned long tmc_reset_etf_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
{
long size = 0;
struct cs_buffers *buf = sink_config;
if (buf) {
/*
* In snapshot mode ->data_size holds the new address of the
* ring buffer's head. The size itself is the whole address
* range since we want the latest information.
*/
if (buf->snapshot)
handle->head = local_xchg(&buf->data_size,
buf->nr_pages << PAGE_SHIFT);
/*
* Tell the tracer PMU how much we got in this run and if
* something went wrong along the way. Nobody else can use
* this cs_buffers instance until we are done. As such
* resetting parameters here and squaring off with the ring
* buffer API in the tracer PMU is fine.
*/
size = local_xchg(&buf->data_size, 0);
}
return size;
}
static void tmc_update_etf_buffer(struct coresight_device *csdev,
static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *sink_config)
{
@@ -387,17 +399,17 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
const u32 *barrier;
u32 *buf_ptr;
u64 read_ptr, write_ptr;
u32 status, to_read;
unsigned long offset;
u32 status;
unsigned long offset, to_read;
struct cs_buffers *buf = sink_config;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
if (!buf)
return;
return 0;
/* This shouldn't happen */
if (WARN_ON_ONCE(drvdata->mode != CS_MODE_PERF))
return;
return 0;
CS_UNLOCK(drvdata->base);
@@ -438,10 +450,10 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
case TMC_MEM_INTF_WIDTH_32BITS:
case TMC_MEM_INTF_WIDTH_64BITS:
case TMC_MEM_INTF_WIDTH_128BITS:
mask = GENMASK(31, 5);
mask = GENMASK(31, 4);
break;
case TMC_MEM_INTF_WIDTH_256BITS:
mask = GENMASK(31, 6);
mask = GENMASK(31, 5);
break;
}
@@ -486,18 +498,14 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
}
}
/*
* In snapshot mode all we have to do is communicate to
* perf_aux_output_end() the address of the current head. In full
* trace mode the same function expects a size to move rb->aux_head
* forward.
*/
if (buf->snapshot)
local_set(&buf->data_size, (cur * PAGE_SIZE) + offset);
else
local_add(to_read, &buf->data_size);
/* In snapshot mode we have to update the head */
if (buf->snapshot) {
handle->head = (cur * PAGE_SIZE) + offset;
to_read = buf->nr_pages << PAGE_SHIFT;
}
CS_LOCK(drvdata->base);
return to_read;
}
static const struct coresight_ops_sink tmc_etf_sink_ops = {
@@ -505,8 +513,6 @@ static const struct coresight_ops_sink tmc_etf_sink_ops = {
.disable = tmc_disable_etf_sink,
.alloc_buffer = tmc_alloc_etf_buffer,
.free_buffer = tmc_free_etf_buffer,
.set_buffer = tmc_set_etf_buffer,
.reset_buffer = tmc_reset_etf_buffer,
.update_buffer = tmc_update_etf_buffer,
};
@@ -563,7 +569,7 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
/* Disable the TMC if need be */
if (drvdata->mode == CS_MODE_SYSFS)
tmc_etb_disable_hw(drvdata);
__tmc_etb_disable_hw(drvdata);
drvdata->reading = true;
out:
@@ -603,7 +609,7 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
* can't be NULL.
*/
memset(drvdata->buf, 0, drvdata->size);
tmc_etb_enable_hw(drvdata);
__tmc_etb_enable_hw(drvdata);
} else {
/*
* The ETB/ETF is not tracing and the buffer was just read.


@@ -10,6 +10,7 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include "coresight-catu.h"
#include "coresight-etm-perf.h"
#include "coresight-priv.h"
#include "coresight-tmc.h"
@@ -20,6 +21,28 @@ struct etr_flat_buf {
size_t size;
};
/*
* etr_perf_buffer - Perf buffer used for ETR
* @etr_buf - Actual buffer used by the ETR
* @snapshot - Perf session mode
* @head - handle->head at the beginning of the session.
* @nr_pages - Number of pages in the ring buffer.
* @pages - Array of Pages in the ring buffer.
*/
struct etr_perf_buffer {
struct etr_buf *etr_buf;
bool snapshot;
unsigned long head;
int nr_pages;
void **pages;
};
/* Convert the perf index to an offset within the ETR buffer */
#define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT))
/* Lower limit for ETR hardware buffer */
#define TMC_ETR_PERF_MIN_BUF_SIZE SZ_1M
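A quick worked example of the two definitions above (illustrative numbers, assuming 4 KiB pages): with nr_pages == 4 the perf ring buffer spans 16384 bytes, so a handle->head of 20000 maps to PERF_IDX2OFF(20000, buf) == 20000 % 16384 == 3616, while the smallest hardware buffer the driver will settle for is 1 MiB.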
/*
* The TMC ETR SG has a page size of 4K. The SG table contains pointers
* to 4KB buffers. However, the OS may use a PAGE_SIZE different from
@@ -536,7 +559,7 @@ tmc_init_etr_sg_table(struct device *dev, int node,
sg_table = tmc_alloc_sg_table(dev, node, nr_tpages, nr_dpages, pages);
if (IS_ERR(sg_table)) {
kfree(etr_table);
return ERR_PTR(PTR_ERR(sg_table));
return ERR_CAST(sg_table);
}
etr_table->sg_table = sg_table;
@@ -728,12 +751,14 @@ tmc_etr_get_catu_device(struct tmc_drvdata *drvdata)
return NULL;
}
static inline void tmc_etr_enable_catu(struct tmc_drvdata *drvdata)
static inline int tmc_etr_enable_catu(struct tmc_drvdata *drvdata,
struct etr_buf *etr_buf)
{
struct coresight_device *catu = tmc_etr_get_catu_device(drvdata);
if (catu && helper_ops(catu)->enable)
helper_ops(catu)->enable(catu, drvdata->etr_buf);
return helper_ops(catu)->enable(catu, etr_buf);
return 0;
}
static inline void tmc_etr_disable_catu(struct tmc_drvdata *drvdata)
@@ -895,17 +920,11 @@ static void tmc_sync_etr_buf(struct tmc_drvdata *drvdata)
tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset);
}
static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
{
u32 axictl, sts;
struct etr_buf *etr_buf = drvdata->etr_buf;
/*
* If this ETR is connected to a CATU, enable it before we turn
* this on
*/
tmc_etr_enable_catu(drvdata);
CS_UNLOCK(drvdata->base);
/* Wait for TMCSReady bit to be set */
@@ -924,11 +943,8 @@ static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
axictl |= TMC_AXICTL_ARCACHE_OS;
}
if (etr_buf->mode == ETR_MODE_ETR_SG) {
if (WARN_ON(!tmc_etr_has_cap(drvdata, TMC_ETR_SG)))
return;
if (etr_buf->mode == ETR_MODE_ETR_SG)
axictl |= TMC_AXICTL_SCT_GAT_MODE;
}
writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
tmc_write_dba(drvdata, etr_buf->hwaddr);
@@ -954,19 +970,54 @@ static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static int tmc_etr_enable_hw(struct tmc_drvdata *drvdata,
struct etr_buf *etr_buf)
{
int rc;
/* Callers should provide an appropriate buffer for use */
if (WARN_ON(!etr_buf))
return -EINVAL;
if ((etr_buf->mode == ETR_MODE_ETR_SG) &&
WARN_ON(!tmc_etr_has_cap(drvdata, TMC_ETR_SG)))
return -EINVAL;
if (WARN_ON(drvdata->etr_buf))
return -EBUSY;
/*
* If this ETR is connected to a CATU, enable it before we turn
* this on.
*/
rc = tmc_etr_enable_catu(drvdata, etr_buf);
if (rc)
return rc;
rc = coresight_claim_device(drvdata->base);
if (!rc) {
drvdata->etr_buf = etr_buf;
__tmc_etr_enable_hw(drvdata);
}
return rc;
}
/*
* Return the available trace data in the buffer (starts at etr_buf->offset,
* limited by etr_buf->len) from @pos, with a maximum limit of @len,
* also updating @bufpp on where to find it. Since the trace data
* can start anywhere in the buffer, depending on the RRP, we adjust
* the @len returned to handle wrapping around the buffer.
*
* We are protected here by drvdata->reading != 0, which ensures the
* sysfs_buf stays alive.
*/
ssize_t tmc_etr_get_sysfs_trace(struct tmc_drvdata *drvdata,
loff_t pos, size_t len, char **bufpp)
{
s64 offset;
ssize_t actual = len;
struct etr_buf *etr_buf = drvdata->etr_buf;
struct etr_buf *etr_buf = drvdata->sysfs_buf;
if (pos + actual > etr_buf->len)
actual = etr_buf->len - pos;
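For example (illustrative numbers): with etr_buf->len == 65536, a request at pos == 61440 for len == 8192 is trimmed to actual == 65536 - 61440 == 4096, so the caller never reads past the end of the captured trace.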
@@ -996,10 +1047,17 @@ tmc_etr_free_sysfs_buf(struct etr_buf *buf)
static void tmc_etr_sync_sysfs_buf(struct tmc_drvdata *drvdata)
{
tmc_sync_etr_buf(drvdata);
struct etr_buf *etr_buf = drvdata->etr_buf;
if (WARN_ON(drvdata->sysfs_buf != etr_buf)) {
tmc_etr_free_sysfs_buf(drvdata->sysfs_buf);
drvdata->sysfs_buf = NULL;
} else {
tmc_sync_etr_buf(drvdata);
}
}
static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
static void __tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
@@ -1015,8 +1073,16 @@ static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
{
__tmc_etr_disable_hw(drvdata);
/* Disable CATU device if this ETR is connected to one */
tmc_etr_disable_catu(drvdata);
coresight_disclaim_device(drvdata->base);
/* Reset the ETR buf used by hardware */
drvdata->etr_buf = NULL;
}
static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
@@ -1024,7 +1090,7 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
int ret = 0;
unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etr_buf *new_buf = NULL, *free_buf = NULL;
struct etr_buf *sysfs_buf = NULL, *new_buf = NULL, *free_buf = NULL;
/*
* If we are enabling the ETR from disabled state, we need to make
@@ -1035,7 +1101,8 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
* with the lock released.
*/
spin_lock_irqsave(&drvdata->spinlock, flags);
if (!drvdata->etr_buf || (drvdata->etr_buf->size != drvdata->size)) {
sysfs_buf = READ_ONCE(drvdata->sysfs_buf);
if (!sysfs_buf || (sysfs_buf->size != drvdata->size)) {
spin_unlock_irqrestore(&drvdata->spinlock, flags);
/* Allocate memory with the locks released */
@@ -1064,14 +1131,15 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
* If we don't have a buffer or it doesn't match the requested size,
* use the buffer allocated above. Otherwise reuse the existing buffer.
*/
if (!drvdata->etr_buf ||
(new_buf && drvdata->etr_buf->size != new_buf->size)) {
free_buf = drvdata->etr_buf;
drvdata->etr_buf = new_buf;
sysfs_buf = READ_ONCE(drvdata->sysfs_buf);
if (!sysfs_buf || (new_buf && sysfs_buf->size != new_buf->size)) {
free_buf = sysfs_buf;
drvdata->sysfs_buf = new_buf;
}
drvdata->mode = CS_MODE_SYSFS;
tmc_etr_enable_hw(drvdata);
ret = tmc_etr_enable_hw(drvdata, drvdata->sysfs_buf);
if (!ret)
drvdata->mode = CS_MODE_SYSFS;
out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
@@ -1080,24 +1148,244 @@ out:
tmc_etr_free_sysfs_buf(free_buf);
if (!ret)
dev_info(drvdata->dev, "TMC-ETR enabled\n");
dev_dbg(drvdata->dev, "TMC-ETR enabled\n");
return ret;
}
static int tmc_enable_etr_sink_perf(struct coresight_device *csdev)
/*
* tmc_etr_setup_perf_buf: Allocate ETR buffer for use by perf.
* The size of the hardware buffer is dependent on the size configured
* via sysfs and the perf ring buffer size. We prefer to allocate the
* largest possible size, scaling down the size by half until it
* reaches a minimum limit (1M), beyond which we give up.
*/
static struct etr_perf_buffer *
tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, int node, int nr_pages,
void **pages, bool snapshot)
{
/* We don't support perf mode yet ! */
return -EINVAL;
struct etr_buf *etr_buf;
struct etr_perf_buffer *etr_perf;
unsigned long size;
etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node);
if (!etr_perf)
return ERR_PTR(-ENOMEM);
/*
* Try to match the perf ring buffer size if it is larger
* than the size requested via sysfs.
*/
if ((nr_pages << PAGE_SHIFT) > drvdata->size) {
etr_buf = tmc_alloc_etr_buf(drvdata, (nr_pages << PAGE_SHIFT),
0, node, NULL);
if (!IS_ERR(etr_buf))
goto done;
}
/*
* Else switch to configured size for this ETR
* and scale down until we hit the minimum limit.
*/
size = drvdata->size;
do {
etr_buf = tmc_alloc_etr_buf(drvdata, size, 0, node, NULL);
if (!IS_ERR(etr_buf))
goto done;
size /= 2;
} while (size >= TMC_ETR_PERF_MIN_BUF_SIZE);
kfree(etr_perf);
return ERR_PTR(-ENOMEM);
done:
etr_perf->etr_buf = etr_buf;
return etr_perf;
}
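The fallback order is easier to see in isolation. A minimal sketch of the same strategy, where try_alloc() is a hypothetical stand-in for tmc_alloc_etr_buf() (not a real kernel API):

static void *alloc_largest_buffer(size_t perf_size, size_t sysfs_size,
				  size_t min_size)
{
	void *buf = NULL;
	size_t size;

	/* Prefer to match the perf ring buffer when it is larger. */
	if (perf_size > sysfs_size)
		buf = try_alloc(perf_size);	/* hypothetical helper */

	/* Otherwise halve the configured size until it fits or hits min. */
	for (size = sysfs_size; !buf && size >= min_size; size /= 2)
		buf = try_alloc(size);

	return buf;	/* NULL if nothing >= min_size could be allocated */
}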
static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode)
static void *tmc_alloc_etr_buffer(struct coresight_device *csdev,
int cpu, void **pages, int nr_pages,
bool snapshot)
{
struct etr_perf_buffer *etr_perf;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
if (cpu == -1)
cpu = smp_processor_id();
etr_perf = tmc_etr_setup_perf_buf(drvdata, cpu_to_node(cpu),
nr_pages, pages, snapshot);
if (IS_ERR(etr_perf)) {
dev_dbg(drvdata->dev, "Unable to allocate ETR buffer\n");
return NULL;
}
etr_perf->snapshot = snapshot;
etr_perf->nr_pages = nr_pages;
etr_perf->pages = pages;
return etr_perf;
}
static void tmc_free_etr_buffer(void *config)
{
struct etr_perf_buffer *etr_perf = config;
if (etr_perf->etr_buf)
tmc_free_etr_buf(etr_perf->etr_buf);
kfree(etr_perf);
}
/*
* tmc_etr_sync_perf_buffer: Copy the actual trace data from the hardware
* buffer to the perf ring buffer.
*/
static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf)
{
long bytes, to_copy;
long pg_idx, pg_offset, src_offset;
unsigned long head = etr_perf->head;
char **dst_pages, *src_buf;
struct etr_buf *etr_buf = etr_perf->etr_buf;
head = etr_perf->head;
pg_idx = head >> PAGE_SHIFT;
pg_offset = head & (PAGE_SIZE - 1);
dst_pages = (char **)etr_perf->pages;
src_offset = etr_buf->offset;
to_copy = etr_buf->len;
while (to_copy > 0) {
/*
* In one iteration, we can copy the minimum of:
* 1) what is available in the source buffer,
* 2) what is available in the source buffer before it
* wraps around, and
* 3) what is available in the destination page.
*/
bytes = tmc_etr_buf_get_data(etr_buf, src_offset, to_copy,
&src_buf);
if (WARN_ON_ONCE(bytes <= 0))
break;
bytes = min(bytes, (long)(PAGE_SIZE - pg_offset));
memcpy(dst_pages[pg_idx] + pg_offset, src_buf, bytes);
to_copy -= bytes;
/* Move destination pointers */
pg_offset += bytes;
if (pg_offset == PAGE_SIZE) {
pg_offset = 0;
if (++pg_idx == etr_perf->nr_pages)
pg_idx = 0;
}
/* Move source pointers */
src_offset += bytes;
if (src_offset >= etr_buf->size)
src_offset -= etr_buf->size;
}
}
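To make the loop bounds concrete (illustrative numbers): if 300 bytes are left to copy, tmc_etr_buf_get_data() returns a 100-byte fragment (it never hands out data across the hardware buffer's wrap point), and only 50 bytes remain in the current destination page, then this iteration copies min(100, 50) == 50 bytes before advancing the page index and offset.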
/*
* tmc_update_etr_buffer : Update the perf ring buffer with the
* available trace data. We use software double buffering at the moment.
*
* TODO: Add support for reusing the perf ring buffer.
*/
static unsigned long
tmc_update_etr_buffer(struct coresight_device *csdev,
struct perf_output_handle *handle,
void *config)
{
bool lost = false;
unsigned long flags, size = 0;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etr_perf_buffer *etr_perf = config;
struct etr_buf *etr_buf = etr_perf->etr_buf;
spin_lock_irqsave(&drvdata->spinlock, flags);
if (WARN_ON(drvdata->perf_data != etr_perf)) {
lost = true;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
goto out;
}
CS_UNLOCK(drvdata->base);
tmc_flush_and_stop(drvdata);
tmc_sync_etr_buf(drvdata);
CS_LOCK(drvdata->base);
/* Reset perf specific data */
drvdata->perf_data = NULL;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
size = etr_buf->len;
tmc_etr_sync_perf_buffer(etr_perf);
/*
* Update handle->head in snapshot mode. Also update the size to the
* hardware buffer size if there was an overflow.
*/
if (etr_perf->snapshot) {
handle->head += size;
if (etr_buf->full)
size = etr_buf->size;
}
lost |= etr_buf->full;
out:
if (lost)
perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);
return size;
}
static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data)
{
int rc = 0;
unsigned long flags;
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct perf_output_handle *handle = data;
struct etr_perf_buffer *etr_perf = etm_perf_sink_config(handle);
spin_lock_irqsave(&drvdata->spinlock, flags);
/*
* There can be only one writer per sink in perf mode. If the sink
* is already open in SYSFS mode, we can't use it.
*/
if (drvdata->mode != CS_MODE_DISABLED || WARN_ON(drvdata->perf_data)) {
rc = -EBUSY;
goto unlock_out;
}
if (WARN_ON(!etr_perf || !etr_perf->etr_buf)) {
rc = -EINVAL;
goto unlock_out;
}
etr_perf->head = PERF_IDX2OFF(handle->head, etr_perf);
drvdata->perf_data = etr_perf;
rc = tmc_etr_enable_hw(drvdata, etr_perf->etr_buf);
if (!rc)
drvdata->mode = CS_MODE_PERF;
unlock_out:
spin_unlock_irqrestore(&drvdata->spinlock, flags);
return rc;
}
static int tmc_enable_etr_sink(struct coresight_device *csdev,
u32 mode, void *data)
{
switch (mode) {
case CS_MODE_SYSFS:
return tmc_enable_etr_sink_sysfs(csdev);
case CS_MODE_PERF:
return tmc_enable_etr_sink_perf(csdev);
return tmc_enable_etr_sink_perf(csdev, data);
}
/* We shouldn't be here */
@@ -1123,12 +1411,15 @@ static void tmc_disable_etr_sink(struct coresight_device *csdev)
spin_unlock_irqrestore(&drvdata->spinlock, flags);
dev_info(drvdata->dev, "TMC-ETR disabled\n");
dev_dbg(drvdata->dev, "TMC-ETR disabled\n");
}
static const struct coresight_ops_sink tmc_etr_sink_ops = {
.enable = tmc_enable_etr_sink,
.disable = tmc_disable_etr_sink,
.alloc_buffer = tmc_alloc_etr_buffer,
.update_buffer = tmc_update_etr_buffer,
.free_buffer = tmc_free_etr_buffer,
};
const struct coresight_ops tmc_etr_cs_ops = {
@@ -1150,21 +1441,19 @@ int tmc_read_prepare_etr(struct tmc_drvdata *drvdata)
goto out;
}
/* Don't interfere if operated from Perf */
if (drvdata->mode == CS_MODE_PERF) {
/*
* We can safely allow reads even if the ETR is operating in PERF mode,
* since the sysfs session is captured in mode specific data.
* If drvdata::sysfs_buf is NULL, the trace data has been read already.
*/
if (!drvdata->sysfs_buf) {
ret = -EINVAL;
goto out;
}
/* If drvdata::etr_buf is NULL the trace data has been read already */
if (drvdata->etr_buf == NULL) {
ret = -EINVAL;
goto out;
}
/* Disable the TMC if need be */
/* Disable the TMC if we are trying to read from a running session. */
if (drvdata->mode == CS_MODE_SYSFS)
tmc_etr_disable_hw(drvdata);
__tmc_etr_disable_hw(drvdata);
drvdata->reading = true;
out:
@@ -1176,7 +1465,7 @@ out:
int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata)
{
unsigned long flags;
struct etr_buf *etr_buf = NULL;
struct etr_buf *sysfs_buf = NULL;
/* config types are set a boot time and never change */
if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR))
@@ -1191,22 +1480,22 @@ int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata)
* buffer. Since the tracer is still enabled drvdata::buf can't
* be NULL.
*/
tmc_etr_enable_hw(drvdata);
__tmc_etr_enable_hw(drvdata);
} else {
/*
* The ETR is not tracing and the buffer was just read.
* As such prepare to free the trace buffer.
*/
etr_buf = drvdata->etr_buf;
drvdata->etr_buf = NULL;
sysfs_buf = drvdata->sysfs_buf;
drvdata->sysfs_buf = NULL;
}
drvdata->reading = false;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
/* Free allocated memory out side of the spinlock */
if (etr_buf)
tmc_free_etr_buf(etr_buf);
if (sysfs_buf)
tmc_etr_free_sysfs_buf(sysfs_buf);
return 0;
}


@@ -81,7 +81,7 @@ static int tmc_read_prepare(struct tmc_drvdata *drvdata)
}
if (!ret)
dev_info(drvdata->dev, "TMC read start\n");
dev_dbg(drvdata->dev, "TMC read start\n");
return ret;
}
@@ -103,7 +103,7 @@ static int tmc_read_unprepare(struct tmc_drvdata *drvdata)
}
if (!ret)
dev_info(drvdata->dev, "TMC read end\n");
dev_dbg(drvdata->dev, "TMC read end\n");
return ret;
}


@@ -170,6 +170,8 @@ struct etr_buf {
* @trigger_cntr: amount of words to store after a trigger.
* @etr_caps: Bitmask of capabilities of the TMC ETR, inferred from the
* device configuration register (DEVID)
* @perf_data: PERF buffer for ETR.
* @sysfs_buf: SYSFS buffer for ETR.
*/
struct tmc_drvdata {
void __iomem *base;
@@ -189,6 +191,8 @@ struct tmc_drvdata {
enum tmc_mem_intf_width memwidth;
u32 trigger_cntr;
u32 etr_caps;
struct etr_buf *sysfs_buf;
void *perf_data;
};
struct etr_buf_operations {


@@ -68,13 +68,13 @@ static void tpiu_enable_hw(struct tpiu_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static int tpiu_enable(struct coresight_device *csdev, u32 mode)
static int tpiu_enable(struct coresight_device *csdev, u32 mode, void *__unused)
{
struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
tpiu_enable_hw(drvdata);
dev_info(drvdata->dev, "TPIU enabled\n");
dev_dbg(drvdata->dev, "TPIU enabled\n");
return 0;
}
@@ -100,7 +100,7 @@ static void tpiu_disable(struct coresight_device *csdev)
tpiu_disable_hw(drvdata);
dev_info(drvdata->dev, "TPIU disabled\n");
dev_dbg(drvdata->dev, "TPIU disabled\n");
}
static const struct coresight_ops_sink tpiu_sink_ops = {


@@ -128,16 +128,105 @@ static int coresight_find_link_outport(struct coresight_device *csdev,
return -ENODEV;
}
static int coresight_enable_sink(struct coresight_device *csdev, u32 mode)
static inline u32 coresight_read_claim_tags(void __iomem *base)
{
return readl_relaxed(base + CORESIGHT_CLAIMCLR);
}
static inline bool coresight_is_claimed_self_hosted(void __iomem *base)
{
return coresight_read_claim_tags(base) == CORESIGHT_CLAIM_SELF_HOSTED;
}
static inline bool coresight_is_claimed_any(void __iomem *base)
{
return coresight_read_claim_tags(base) != 0;
}
static inline void coresight_set_claim_tags(void __iomem *base)
{
writel_relaxed(CORESIGHT_CLAIM_SELF_HOSTED, base + CORESIGHT_CLAIMSET);
isb();
}
static inline void coresight_clear_claim_tags(void __iomem *base)
{
writel_relaxed(CORESIGHT_CLAIM_SELF_HOSTED, base + CORESIGHT_CLAIMCLR);
isb();
}
/*
* coresight_claim_device_unlocked : Claim the device for self-hosted usage
* to prevent an external tool from touching this device. As per PSCI
* standards, section "Preserving the execution context" => "Debug and Trace
* save and Restore", DBGCLAIM[1] is reserved for Self-hosted debug/trace and
* DBGCLAIM[0] is reserved for external tools.
*
* Called with CS_UNLOCKed for the component.
* Returns : 0 on success
*/
int coresight_claim_device_unlocked(void __iomem *base)
{
if (coresight_is_claimed_any(base))
return -EBUSY;
coresight_set_claim_tags(base);
if (coresight_is_claimed_self_hosted(base))
return 0;
/* There was a race setting the tags, clean up and fail */
coresight_clear_claim_tags(base);
return -EBUSY;
}
int coresight_claim_device(void __iomem *base)
{
int rc;
CS_UNLOCK(base);
rc = coresight_claim_device_unlocked(base);
CS_LOCK(base);
return rc;
}
/*
* coresight_disclaim_device_unlocked : Clear the claim tags for the device.
* Called with CS_UNLOCKed for the component.
*/
void coresight_disclaim_device_unlocked(void __iomem *base)
{
if (coresight_is_claimed_self_hosted(base))
coresight_clear_claim_tags(base);
else
/*
* The external agent may have not honoured our claim
* and has manipulated it. Or something else has seriously
* gone wrong in our driver.
*/
WARN_ON_ONCE(1);
}
void coresight_disclaim_device(void __iomem *base)
{
CS_UNLOCK(base);
coresight_disclaim_device_unlocked(base);
CS_LOCK(base);
}
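Putting the claim helpers together, a driver is expected to follow the pattern sketched below (illustrative only; it mirrors what the TMC changes above do, with the actual hardware programming elided):

static int example_sink_enable(void __iomem *base)
{
	/* coresight_claim_device() does its own CS_UNLOCK()/CS_LOCK() */
	int rc = coresight_claim_device(base);

	if (rc)		/* an external debugger holds the claim tag */
		return rc;

	/* ... program and start the component ... */
	return 0;
}

static void example_sink_disable(void __iomem *base)
{
	CS_UNLOCK(base);
	/* ... flush and stop the component ... */
	coresight_disclaim_device_unlocked(base);	/* already unlocked */
	CS_LOCK(base);
}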
static int coresight_enable_sink(struct coresight_device *csdev,
u32 mode, void *data)
{
int ret;
if (!csdev->enable) {
if (sink_ops(csdev)->enable) {
ret = sink_ops(csdev)->enable(csdev, mode);
if (ret)
return ret;
}
/*
* We need to make sure the "new" session is compatible with the
* existing "mode" of operation.
*/
if (sink_ops(csdev)->enable) {
ret = sink_ops(csdev)->enable(csdev, mode, data);
if (ret)
return ret;
csdev->enable = true;
}
@@ -184,8 +273,10 @@ static int coresight_enable_link(struct coresight_device *csdev,
if (atomic_inc_return(&csdev->refcnt[refport]) == 1) {
if (link_ops(csdev)->enable) {
ret = link_ops(csdev)->enable(csdev, inport, outport);
if (ret)
if (ret) {
atomic_dec(&csdev->refcnt[refport]);
return ret;
}
}
}
@@ -274,13 +365,21 @@ static bool coresight_disable_source(struct coresight_device *csdev)
return !csdev->enable;
}
void coresight_disable_path(struct list_head *path)
/*
* coresight_disable_path_from : Disable components in the given path beyond
* @nd in the list. If @nd is NULL, all the components, except the SOURCE are
* disabled.
*/
static void coresight_disable_path_from(struct list_head *path,
struct coresight_node *nd)
{
u32 type;
struct coresight_node *nd;
struct coresight_device *csdev, *parent, *child;
list_for_each_entry(nd, path, link) {
if (!nd)
nd = list_first_entry(path, struct coresight_node, link);
list_for_each_entry_continue(nd, path, link) {
csdev = nd->csdev;
type = csdev->type;
@@ -300,7 +399,12 @@ void coresight_disable_path(struct list_head *path)
coresight_disable_sink(csdev);
break;
case CORESIGHT_DEV_TYPE_SOURCE:
/* sources are disabled from either sysFS or Perf */
/*
* We skip the first node in the path assuming that it
* is the source. So we don't expect a source device in
* the middle of a path.
*/
WARN_ON(1);
break;
case CORESIGHT_DEV_TYPE_LINK:
parent = list_prev_entry(nd, link)->csdev;
@@ -313,7 +417,12 @@ void coresight_disable_path(struct list_head *path)
}
}
int coresight_enable_path(struct list_head *path, u32 mode)
void coresight_disable_path(struct list_head *path)
{
coresight_disable_path_from(path, NULL);
}
int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data)
{
int ret = 0;
@@ -338,9 +447,15 @@ int coresight_enable_path(struct list_head *path, u32 mode)
switch (type) {
case CORESIGHT_DEV_TYPE_SINK:
ret = coresight_enable_sink(csdev, mode);
ret = coresight_enable_sink(csdev, mode, sink_data);
/*
* Sink is the first component turned on. If we
* failed to enable the sink, there are no components
* that need disabling. Disabling the path here
* would mean we could disrupt an existing session.
*/
if (ret)
goto err;
goto out;
break;
case CORESIGHT_DEV_TYPE_SOURCE:
/* sources are enabled from either sysFS or Perf */
@@ -360,7 +475,7 @@ int coresight_enable_path(struct list_head *path, u32 mode)
out:
return ret;
err:
coresight_disable_path(path);
coresight_disable_path_from(path, nd);
goto out;
}
@@ -635,7 +750,7 @@ int coresight_enable(struct coresight_device *csdev)
goto out;
}
ret = coresight_enable_path(path, CS_MODE_SYSFS);
ret = coresight_enable_path(path, CS_MODE_SYSFS, NULL);
if (ret)
goto err_path;
@@ -995,18 +1110,16 @@ postcore_initcall(coresight_init);
struct coresight_device *coresight_register(struct coresight_desc *desc)
{
int i;
int ret;
int link_subtype;
int nr_refcnts = 1;
atomic_t *refcnts = NULL;
struct coresight_device *csdev;
struct coresight_connection *conns = NULL;
csdev = kzalloc(sizeof(*csdev), GFP_KERNEL);
if (!csdev) {
ret = -ENOMEM;
goto err_kzalloc_csdev;
goto err_out;
}
if (desc->type == CORESIGHT_DEV_TYPE_LINK ||
@@ -1022,7 +1135,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL);
if (!refcnts) {
ret = -ENOMEM;
goto err_kzalloc_refcnts;
goto err_free_csdev;
}
csdev->refcnt = refcnts;
@@ -1030,22 +1143,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
csdev->nr_inport = desc->pdata->nr_inport;
csdev->nr_outport = desc->pdata->nr_outport;
/* Initialise connections if there is at least one outport */
if (csdev->nr_outport) {
conns = kcalloc(csdev->nr_outport, sizeof(*conns), GFP_KERNEL);
if (!conns) {
ret = -ENOMEM;
goto err_kzalloc_conns;
}
for (i = 0; i < csdev->nr_outport; i++) {
conns[i].outport = desc->pdata->outports[i];
conns[i].child_name = desc->pdata->child_names[i];
conns[i].child_port = desc->pdata->child_ports[i];
}
}
csdev->conns = conns;
csdev->conns = desc->pdata->conns;
csdev->type = desc->type;
csdev->subtype = desc->subtype;
@@ -1062,7 +1160,11 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
ret = device_register(&csdev->dev);
if (ret) {
put_device(&csdev->dev);
goto err_kzalloc_csdev;
/*
* All resources are free'd explicitly via
* coresight_device_release(), triggered from put_device().
*/
goto err_out;
}
mutex_lock(&coresight_mutex);
@@ -1074,11 +1176,9 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
return csdev;
err_kzalloc_conns:
kfree(refcnts);
err_kzalloc_refcnts:
err_free_csdev:
kfree(csdev);
err_kzalloc_csdev:
err_out:
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(coresight_register);


@@ -45,8 +45,13 @@ of_coresight_get_endpoint_device(struct device_node *endpoint)
endpoint, of_dev_node_match);
}
static void of_coresight_get_ports(const struct device_node *node,
int *nr_inport, int *nr_outport)
static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
{
return of_property_read_bool(ep, "slave-mode");
}
static void of_coresight_get_ports_legacy(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *ep = NULL;
int in = 0, out = 0;
@@ -56,7 +61,7 @@ static void of_coresight_get_ports(const struct device_node *node,
if (!ep)
break;
if (of_property_read_bool(ep, "slave-mode"))
if (of_coresight_legacy_ep_is_input(ep))
in++;
else
out++;
@@ -67,32 +72,77 @@ static void of_coresight_get_ports(const struct device_node *node,
*nr_outport = out;
}
static struct device_node *of_coresight_get_port_parent(struct device_node *ep)
{
struct device_node *parent = of_graph_get_port_parent(ep);
/*
* Skip one level up to the real device node, if we
* are using the new bindings.
*/
if (!of_node_cmp(parent->name, "in-ports") ||
!of_node_cmp(parent->name, "out-ports"))
parent = of_get_next_parent(parent);
return parent;
}
static inline struct device_node *
of_coresight_get_input_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "in-ports");
}
static inline struct device_node *
of_coresight_get_output_ports_node(const struct device_node *node)
{
return of_get_child_by_name(node, "out-ports");
}
static inline int
of_coresight_count_ports(struct device_node *port_parent)
{
int i = 0;
struct device_node *ep = NULL;
while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
i++;
return i;
}
static void of_coresight_get_ports(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *input_ports = NULL, *output_ports = NULL;
input_ports = of_coresight_get_input_ports_node(node);
output_ports = of_coresight_get_output_ports_node(node);
if (input_ports || output_ports) {
if (input_ports) {
*nr_inport = of_coresight_count_ports(input_ports);
of_node_put(input_ports);
}
if (output_ports) {
*nr_outport = of_coresight_count_ports(output_ports);
of_node_put(output_ports);
}
} else {
/* Fall back to legacy DT bindings parsing */
of_coresight_get_ports_legacy(node, nr_inport, nr_outport);
}
}
static int of_coresight_alloc_memory(struct device *dev,
struct coresight_platform_data *pdata)
{
/* List of output port on this component */
pdata->outports = devm_kcalloc(dev,
pdata->nr_outport,
sizeof(*pdata->outports),
GFP_KERNEL);
if (!pdata->outports)
return -ENOMEM;
/* Children connected to this component via @outports */
pdata->child_names = devm_kcalloc(dev,
pdata->nr_outport,
sizeof(*pdata->child_names),
GFP_KERNEL);
if (!pdata->child_names)
return -ENOMEM;
/* Port number on the child this component is connected to */
pdata->child_ports = devm_kcalloc(dev,
pdata->nr_outport,
sizeof(*pdata->child_ports),
GFP_KERNEL);
if (!pdata->child_ports)
return -ENOMEM;
if (pdata->nr_outport) {
pdata->conns = devm_kzalloc(dev, pdata->nr_outport *
sizeof(*pdata->conns),
GFP_KERNEL);
if (!pdata->conns)
return -ENOMEM;
}
return 0;
}
@@ -114,17 +164,78 @@ int of_coresight_get_cpu(const struct device_node *node)
}
EXPORT_SYMBOL_GPL(of_coresight_get_cpu);
/*
* of_coresight_parse_endpoint : Parse the given output endpoint @ep
* and fill the connection information in @conn
*
* Parses the local port, remote device name and the remote port.
*
* Returns :
* 1 - If the parsing is successful and a connection record
* was created for an output connection.
* 0 - If the parsing completed without any fatal errors.
* -Errno - Fatal error, abort the scanning.
*/
static int of_coresight_parse_endpoint(struct device *dev,
struct device_node *ep,
struct coresight_connection *conn)
{
int ret = 0;
struct of_endpoint endpoint, rendpoint;
struct device_node *rparent = NULL;
struct device_node *rep = NULL;
struct device *rdev = NULL;
do {
/* Parse the local port details */
if (of_graph_parse_endpoint(ep, &endpoint))
break;
/*
* Get a handle on the remote endpoint and the device it is
* attached to.
*/
rep = of_graph_get_remote_endpoint(ep);
if (!rep)
break;
rparent = of_coresight_get_port_parent(rep);
if (!rparent)
break;
if (of_graph_parse_endpoint(rep, &rendpoint))
break;
/* If the remote device is not available, defer probing */
rdev = of_coresight_get_endpoint_device(rparent);
if (!rdev) {
ret = -EPROBE_DEFER;
break;
}
conn->outport = endpoint.port;
conn->child_name = devm_kstrdup(dev,
dev_name(rdev),
GFP_KERNEL);
conn->child_port = rendpoint.port;
/* Connection record updated */
ret = 1;
} while (0);
of_node_put(rparent);
of_node_put(rep);
put_device(rdev);
return ret;
}
struct coresight_platform_data *
of_get_coresight_platform_data(struct device *dev,
const struct device_node *node)
{
int i = 0, ret = 0;
int ret = 0;
struct coresight_platform_data *pdata;
struct of_endpoint endpoint, rendpoint;
struct device *rdev;
struct coresight_connection *conn;
struct device_node *ep = NULL;
struct device_node *rparent = NULL;
struct device_node *rport = NULL;
const struct device_node *parent = NULL;
bool legacy_binding = false;
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata)
@@ -132,63 +243,54 @@ of_get_coresight_platform_data(struct device *dev,
/* Use device name as sysfs handle */
pdata->name = dev_name(dev);
pdata->cpu = of_coresight_get_cpu(node);
/* Get the number of input and output port for this component */
of_coresight_get_ports(node, &pdata->nr_inport, &pdata->nr_outport);
if (pdata->nr_outport) {
ret = of_coresight_alloc_memory(dev, pdata);
if (ret)
return ERR_PTR(ret);
/* If there are no output connections, we are done */
if (!pdata->nr_outport)
return pdata;
/* Iterate through each port to discover topology */
do {
/* Get a handle on a port */
ep = of_graph_get_next_endpoint(node, ep);
if (!ep)
break;
ret = of_coresight_alloc_memory(dev, pdata);
if (ret)
return ERR_PTR(ret);
/*
* No need to deal with input ports, as processing
* of the output ports will deal with them.
*/
if (of_find_property(ep, "slave-mode", NULL))
continue;
/* Get a handle on the local endpoint */
ret = of_graph_parse_endpoint(ep, &endpoint);
if (ret)
continue;
/* The local out port number */
pdata->outports[i] = endpoint.port;
/*
* Get a handle on the remote port and parent
* attached to it.
*/
rparent = of_graph_get_remote_port_parent(ep);
rport = of_graph_get_remote_port(ep);
if (!rparent || !rport)
continue;
if (of_graph_parse_endpoint(rport, &rendpoint))
continue;
rdev = of_coresight_get_endpoint_device(rparent);
if (!rdev)
return ERR_PTR(-EPROBE_DEFER);
pdata->child_names[i] = dev_name(rdev);
pdata->child_ports[i] = rendpoint.id;
i++;
} while (ep);
parent = of_coresight_get_output_ports_node(node);
/*
* If the DT uses obsoleted bindings, the ports are listed
* under the device and we need to filter out the input
* ports.
*/
if (!parent) {
legacy_binding = true;
parent = node;
dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
}
pdata->cpu = of_coresight_get_cpu(node);
conn = pdata->conns;
/* Iterate through each output port to discover topology */
while ((ep = of_graph_get_next_endpoint(parent, ep))) {
/*
* Legacy binding mixes input/output ports under the
* same parent. So, skip the input ports if we are dealing
* with legacy binding, as they are processed with their
* connected output ports.
*/
if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
continue;
ret = of_coresight_parse_endpoint(dev, ep, conn);
switch (ret) {
case 1:
conn++; /* Fall through */
case 0:
break;
default:
return ERR_PTR(ret);
}
}
return pdata;
}


@@ -11,6 +11,35 @@ config STM
if STM
config STM_PROTO_BASIC
tristate "Basic STM framing protocol driver"
default CONFIG_STM
help
This is a simple framing protocol for sending data over STM
devices. This was the protocol that the STM framework used
exclusively until the MIPI SyS-T support was added. Use this
driver for compatibility with your existing STM setup.
The receiving side only needs to be able to decode the MIPI
STP protocol in order to extract the data.
If you want to be able to use the basic protocol or want the
backwards compatibility for your existing setup, say Y.
config STM_PROTO_SYS_T
tristate "MIPI SyS-T STM framing protocol driver"
default CONFIG_STM
help
This is an implementation of MIPI SyS-T protocol to be used
over the STP transport. In addition to the data payload, it
also carries additional metadata for time correlation, better
means of trace source identification, etc.
The receiving side must be able to decode this protocol in
addition to the MIPI STP, in order to extract the data.
If you don't know what this is, say N.
config STM_DUMMY
tristate "Dummy STM driver"
help


@@ -3,6 +3,12 @@ obj-$(CONFIG_STM) += stm_core.o
stm_core-y := core.o policy.o
obj-$(CONFIG_STM_PROTO_BASIC) += stm_p_basic.o
obj-$(CONFIG_STM_PROTO_SYS_T) += stm_p_sys-t.o
stm_p_basic-y := p_basic.o
stm_p_sys-t-y := p_sys-t.o
obj-$(CONFIG_STM_DUMMY) += dummy_stm.o
obj-$(CONFIG_STM_SOURCE_CONSOLE) += stm_console.o


@@ -293,15 +293,15 @@ static int stm_output_assign(struct stm_device *stm, unsigned int width,
if (width > stm->data->sw_nchannels)
return -EINVAL;
if (policy_node) {
stp_policy_node_get_ranges(policy_node,
&midx, &mend, &cidx, &cend);
} else {
midx = stm->data->sw_start;
cidx = 0;
mend = stm->data->sw_end;
cend = stm->data->sw_nchannels - 1;
}
/* We no longer accept policy_node==NULL here */
if (WARN_ON_ONCE(!policy_node))
return -EINVAL;
/*
* Also, the caller holds reference to policy_node, so it won't
* disappear on us.
*/
stp_policy_node_get_ranges(policy_node, &midx, &mend, &cidx, &cend);
spin_lock(&stm->mc_lock);
spin_lock(&output->lock);
@@ -316,11 +316,26 @@ static int stm_output_assign(struct stm_device *stm, unsigned int width,
output->master = midx;
output->channel = cidx;
output->nr_chans = width;
if (stm->pdrv->output_open) {
void *priv = stp_policy_node_priv(policy_node);
if (WARN_ON_ONCE(!priv))
goto unlock;
/* configfs subsys mutex is held by the caller */
ret = stm->pdrv->output_open(priv, output);
if (ret)
goto unlock;
}
stm_output_claim(stm, output);
dev_dbg(&stm->dev, "assigned %u:%u (+%u)\n", midx, cidx, width);
ret = 0;
unlock:
if (ret)
output->nr_chans = 0;
spin_unlock(&output->lock);
spin_unlock(&stm->mc_lock);
@@ -333,6 +348,8 @@ static void stm_output_free(struct stm_device *stm, struct stm_output *output)
spin_lock(&output->lock);
if (output->nr_chans)
stm_output_disclaim(stm, output);
if (stm->pdrv && stm->pdrv->output_close)
stm->pdrv->output_close(output);
spin_unlock(&output->lock);
spin_unlock(&stm->mc_lock);
}
@@ -349,6 +366,127 @@ static int major_match(struct device *dev, const void *data)
return MAJOR(dev->devt) == major;
}
/*
* Framing protocol management
* Modules can implement STM protocol drivers and (un-)register them
* with the STM class framework.
*/
static struct list_head stm_pdrv_head;
static struct mutex stm_pdrv_mutex;
struct stm_pdrv_entry {
struct list_head entry;
const struct stm_protocol_driver *pdrv;
const struct config_item_type *node_type;
};
static const struct stm_pdrv_entry *
__stm_lookup_protocol(const char *name)
{
struct stm_pdrv_entry *pe;
/*
* If no name is given (NULL or ""), fall back to "p_basic".
*/
if (!name || !*name)
name = "p_basic";
list_for_each_entry(pe, &stm_pdrv_head, entry) {
if (!strcmp(name, pe->pdrv->name))
return pe;
}
return NULL;
}
int stm_register_protocol(const struct stm_protocol_driver *pdrv)
{
struct stm_pdrv_entry *pe = NULL;
int ret = -ENOMEM;
mutex_lock(&stm_pdrv_mutex);
if (__stm_lookup_protocol(pdrv->name)) {
ret = -EEXIST;
goto unlock;
}
pe = kzalloc(sizeof(*pe), GFP_KERNEL);
if (!pe)
goto unlock;
if (pdrv->policy_attr) {
pe->node_type = get_policy_node_type(pdrv->policy_attr);
if (!pe->node_type)
goto unlock;
}
list_add_tail(&pe->entry, &stm_pdrv_head);
pe->pdrv = pdrv;
ret = 0;
unlock:
mutex_unlock(&stm_pdrv_mutex);
if (ret)
kfree(pe);
return ret;
}
EXPORT_SYMBOL_GPL(stm_register_protocol);
void stm_unregister_protocol(const struct stm_protocol_driver *pdrv)
{
struct stm_pdrv_entry *pe, *iter;
mutex_lock(&stm_pdrv_mutex);
list_for_each_entry_safe(pe, iter, &stm_pdrv_head, entry) {
if (pe->pdrv == pdrv) {
list_del(&pe->entry);
if (pe->node_type) {
kfree(pe->node_type->ct_attrs);
kfree(pe->node_type);
}
kfree(pe);
break;
}
}
mutex_unlock(&stm_pdrv_mutex);
}
EXPORT_SYMBOL_GPL(stm_unregister_protocol);
static bool stm_get_protocol(const struct stm_protocol_driver *pdrv)
{
return try_module_get(pdrv->owner);
}
void stm_put_protocol(const struct stm_protocol_driver *pdrv)
{
module_put(pdrv->owner);
}
int stm_lookup_protocol(const char *name,
const struct stm_protocol_driver **pdrv,
const struct config_item_type **node_type)
{
const struct stm_pdrv_entry *pe;
mutex_lock(&stm_pdrv_mutex);
pe = __stm_lookup_protocol(name);
if (pe && pe->pdrv && stm_get_protocol(pe->pdrv)) {
*pdrv = pe->pdrv;
*node_type = pe->node_type;
}
mutex_unlock(&stm_pdrv_mutex);
return pe ? 0 : -ENOENT;
}
static int stm_char_open(struct inode *inode, struct file *file)
{
struct stm_file *stmf;
@@ -405,42 +543,81 @@ static int stm_char_release(struct inode *inode, struct file *file)
return 0;
}
static int stm_file_assign(struct stm_file *stmf, char *id, unsigned int width)
static int
stm_assign_first_policy(struct stm_device *stm, struct stm_output *output,
char **ids, unsigned int width)
{
struct stm_device *stm = stmf->stm;
int ret;
struct stp_policy_node *pn;
int err, n;
stmf->policy_node = stp_policy_node_lookup(stm, id);
/*
* On success, stp_policy_node_lookup() will return holding the
* configfs subsystem mutex, which is then released in
* stp_policy_node_put(). This allows the pdrv->output_open() in
* stm_output_assign() to serialize against the attribute accessors.
*/
for (n = 0, pn = NULL; ids[n] && !pn; n++)
pn = stp_policy_node_lookup(stm, ids[n]);
ret = stm_output_assign(stm, width, stmf->policy_node, &stmf->output);
if (!pn)
return -EINVAL;
if (stmf->policy_node)
stp_policy_node_put(stmf->policy_node);
err = stm_output_assign(stm, width, pn, output);
return ret;
stp_policy_node_put(pn);
return err;
}
static ssize_t notrace stm_write(struct stm_data *data, unsigned int master,
unsigned int channel, const char *buf, size_t count)
/**
* stm_data_write() - send the given payload as data packets
* @data: stm driver's data
* @m: STP master
* @c: STP channel
* @ts_first: timestamp the first packet
* @buf: data payload buffer
* @count: data payload size
*/
ssize_t notrace stm_data_write(struct stm_data *data, unsigned int m,
unsigned int c, bool ts_first, const void *buf,
size_t count)
{
unsigned int flags = STP_PACKET_TIMESTAMPED;
const unsigned char *p = buf, nil = 0;
size_t pos;
unsigned int flags = ts_first ? STP_PACKET_TIMESTAMPED : 0;
ssize_t sz;
size_t pos;
for (pos = 0, p = buf; count > pos; pos += sz, p += sz) {
for (pos = 0, sz = 0; pos < count; pos += sz) {
sz = min_t(unsigned int, count - pos, 8);
sz = data->packet(data, master, channel, STP_PACKET_DATA, flags,
sz, p);
flags = 0;
if (sz < 0)
sz = data->packet(data, m, c, STP_PACKET_DATA, flags, sz,
&((u8 *)buf)[pos]);
if (sz <= 0)
break;
if (ts_first) {
flags = 0;
ts_first = false;
}
}
data->packet(data, master, channel, STP_PACKET_FLAG, 0, 0, &nil);
return sz < 0 ? sz : pos;
}
EXPORT_SYMBOL_GPL(stm_data_write);
return pos;
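Since an STP DATA packet carries at most 8 bytes, a 20-byte payload, for example, is emitted as three packets of 8, 8 and 4 bytes; when ts_first is set, only the first of the three goes out with STP_PACKET_TIMESTAMPED.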
static ssize_t notrace
stm_write(struct stm_device *stm, struct stm_output *output,
unsigned int chan, const char *buf, size_t count)
{
int err;
/* stm->pdrv is serialized against policy_mutex */
if (!stm->pdrv)
return -ENODEV;
err = stm->pdrv->write(stm->data, output, chan, buf, count);
if (err < 0)
return err;
return err;
}
static ssize_t stm_char_write(struct file *file, const char __user *buf,
@@ -455,16 +632,21 @@ static ssize_t stm_char_write(struct file *file, const char __user *buf,
count = PAGE_SIZE - 1;
/*
* if no m/c have been assigned to this writer up to this
* point, use "default" policy entry
* If no m/c have been assigned to this writer up to this
* point, try to use the task name and "default" policy entries.
*/
if (!stmf->output.nr_chans) {
err = stm_file_assign(stmf, "default", 1);
char comm[sizeof(current->comm)];
char *ids[] = { comm, "default", NULL };
get_task_comm(comm, current);
err = stm_assign_first_policy(stmf->stm, &stmf->output, ids, 1);
/*
* EBUSY means that somebody else just assigned this
* output, which is just fine for write()
*/
if (err && err != -EBUSY)
if (err)
return err;
}
@@ -480,8 +662,7 @@ static ssize_t stm_char_write(struct file *file, const char __user *buf,
pm_runtime_get_sync(&stm->dev);
count = stm_write(stm->data, stmf->output.master, stmf->output.channel,
kbuf, count);
count = stm_write(stm, &stmf->output, 0, kbuf, count);
pm_runtime_mark_last_busy(&stm->dev);
pm_runtime_put_autosuspend(&stm->dev);
@@ -550,6 +731,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
{
struct stm_device *stm = stmf->stm;
struct stp_policy_id *id;
char *ids[] = { NULL, NULL };
int ret = -EINVAL;
u32 size;
@@ -582,7 +764,9 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
id->width > PAGE_SIZE / stm->data->sw_mmiosz)
goto err_free;
ret = stm_file_assign(stmf, id->id, id->width);
ids[0] = id->id;
ret = stm_assign_first_policy(stmf->stm, &stmf->output, ids,
id->width);
if (ret)
goto err_free;
@@ -818,8 +1002,8 @@ EXPORT_SYMBOL_GPL(stm_unregister_device);
static int stm_source_link_add(struct stm_source_device *src,
struct stm_device *stm)
{
char *id;
int err;
char *ids[] = { NULL, "default", NULL };
int err = -ENOMEM;
mutex_lock(&stm->link_mutex);
spin_lock(&stm->link_lock);
@@ -833,19 +1017,13 @@ static int stm_source_link_add(struct stm_source_device *src,
spin_unlock(&stm->link_lock);
mutex_unlock(&stm->link_mutex);
id = kstrdup(src->data->name, GFP_KERNEL);
if (id) {
src->policy_node =
stp_policy_node_lookup(stm, id);
ids[0] = kstrdup(src->data->name, GFP_KERNEL);
if (!ids[0])
goto fail_detach;
kfree(id);
}
err = stm_output_assign(stm, src->data->nr_chans,
src->policy_node, &src->output);
if (src->policy_node)
stp_policy_node_put(src->policy_node);
err = stm_assign_first_policy(stm, &src->output, ids,
src->data->nr_chans);
kfree(ids[0]);
if (err)
goto fail_detach;
@@ -1134,9 +1312,7 @@ int notrace stm_source_write(struct stm_source_data *data,
stm = srcu_dereference(src->link, &stm_source_srcu);
if (stm)
count = stm_write(stm->data, src->output.master,
src->output.channel + chan,
buf, count);
count = stm_write(stm, &src->output, chan, buf, count);
else
count = -ENODEV;
@@ -1163,7 +1339,15 @@ static int __init stm_core_init(void)
goto err_src;
init_srcu_struct(&stm_source_srcu);
INIT_LIST_HEAD(&stm_pdrv_head);
mutex_init(&stm_pdrv_mutex);
/*
* So as to not confuse existing users with a requirement
* to load yet another module, do it here.
*/
if (IS_ENABLED(CONFIG_STM_PROTO_BASIC))
(void)request_module_nowait("stm_p_basic");
stm_core_up++;
return 0;


@@ -76,7 +76,7 @@ static int stm_heartbeat_init(void)
goto fail_unregister;
stm_heartbeat[i].data.nr_chans = 1;
stm_heartbeat[i].data.link = stm_heartbeat_link;
stm_heartbeat[i].data.unlink = stm_heartbeat_unlink;
hrtimer_init(&stm_heartbeat[i].hrtimer, CLOCK_MONOTONIC,
HRTIMER_MODE_ABS);


@@ -0,0 +1,48 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Basic framing protocol for STM devices.
* Copyright (c) 2018, Intel Corporation.
*/
#include <linux/module.h>
#include <linux/device.h>
#include <linux/stm.h>
#include "stm.h"
static ssize_t basic_write(struct stm_data *data, struct stm_output *output,
unsigned int chan, const char *buf, size_t count)
{
unsigned int c = output->channel + chan;
unsigned int m = output->master;
const unsigned char nil = 0;
ssize_t sz;
sz = stm_data_write(data, m, c, true, buf, count);
if (sz > 0)
data->packet(data, m, c, STP_PACKET_FLAG, 0, 0, &nil);
return sz;
}
static const struct stm_protocol_driver basic_pdrv = {
.owner = THIS_MODULE,
.name = "p_basic",
.write = basic_write,
};
static int basic_stm_init(void)
{
return stm_register_protocol(&basic_pdrv);
}
static void basic_stm_exit(void)
{
stm_unregister_protocol(&basic_pdrv);
}
module_init(basic_stm_init);
module_exit(basic_stm_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Basic STM framing protocol driver");
MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@linux.intel.com>");


@@ -0,0 +1,382 @@
// SPDX-License-Identifier: GPL-2.0
/*
* MIPI SyS-T framing protocol for STM devices.
* Copyright (c) 2018, Intel Corporation.
*/
#include <linux/configfs.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include <linux/stm.h>
#include "stm.h"
enum sys_t_message_type {
MIPI_SYST_TYPE_BUILD = 0,
MIPI_SYST_TYPE_SHORT32,
MIPI_SYST_TYPE_STRING,
MIPI_SYST_TYPE_CATALOG,
MIPI_SYST_TYPE_RAW = 6,
MIPI_SYST_TYPE_SHORT64,
MIPI_SYST_TYPE_CLOCK,
};
enum sys_t_message_severity {
MIPI_SYST_SEVERITY_MAX = 0,
MIPI_SYST_SEVERITY_FATAL,
MIPI_SYST_SEVERITY_ERROR,
MIPI_SYST_SEVERITY_WARNING,
MIPI_SYST_SEVERITY_INFO,
MIPI_SYST_SEVERITY_USER1,
MIPI_SYST_SEVERITY_USER2,
MIPI_SYST_SEVERITY_DEBUG,
};
enum sys_t_message_build_subtype {
MIPI_SYST_BUILD_ID_COMPACT32 = 0,
MIPI_SYST_BUILD_ID_COMPACT64,
MIPI_SYST_BUILD_ID_LONG,
};
enum sys_t_message_clock_subtype {
MIPI_SYST_CLOCK_TRANSPORT_SYNC = 1,
};
enum sys_t_message_string_subtype {
MIPI_SYST_STRING_GENERIC = 1,
MIPI_SYST_STRING_FUNCTIONENTER,
MIPI_SYST_STRING_FUNCTIONEXIT,
MIPI_SYST_STRING_INVALIDPARAM = 5,
MIPI_SYST_STRING_ASSERT = 7,
MIPI_SYST_STRING_PRINTF_32 = 11,
MIPI_SYST_STRING_PRINTF_64 = 12,
};
#define MIPI_SYST_TYPE(t) ((u32)(MIPI_SYST_TYPE_ ## t))
#define MIPI_SYST_SEVERITY(s) ((u32)(MIPI_SYST_SEVERITY_ ## s) << 4)
#define MIPI_SYST_OPT_LOC BIT(8)
#define MIPI_SYST_OPT_LEN BIT(9)
#define MIPI_SYST_OPT_CHK BIT(10)
#define MIPI_SYST_OPT_TS BIT(11)
#define MIPI_SYST_UNIT(u) ((u32)(u) << 12)
#define MIPI_SYST_ORIGIN(o) ((u32)(o) << 16)
#define MIPI_SYST_OPT_GUID BIT(23)
#define MIPI_SYST_SUBTYPE(s) ((u32)(MIPI_SYST_ ## s) << 24)
#define MIPI_SYST_UNITLARGE(u) (MIPI_SYST_UNIT(u & 0xf) | \
MIPI_SYST_ORIGIN(u >> 4))
#define MIPI_SYST_TYPES(t, s) (MIPI_SYST_TYPE(t) | \
MIPI_SYST_SUBTYPE(t ## _ ## s))
#define DATA_HEADER (MIPI_SYST_TYPES(STRING, GENERIC) | \
MIPI_SYST_SEVERITY(INFO) | \
MIPI_SYST_OPT_GUID)
#define CLOCK_SYNC_HEADER (MIPI_SYST_TYPES(CLOCK, TRANSPORT_SYNC) | \
MIPI_SYST_SEVERITY(MAX))
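For reference, expanding the two composite headers by hand from the macros above:

/*
 * DATA_HEADER       = 0x2        (MIPI_SYST_TYPE_STRING)
 *                   | 0x4 << 4   (MIPI_SYST_SEVERITY_INFO)
 *                   | BIT(23)    (MIPI_SYST_OPT_GUID)
 *                   | 0x1 << 24  (MIPI_SYST_STRING_GENERIC)
 *                   = 0x01800042
 *
 * CLOCK_SYNC_HEADER = 0x8        (MIPI_SYST_TYPE_CLOCK)
 *                   | 0x0 << 4   (MIPI_SYST_SEVERITY_MAX)
 *                   | 0x1 << 24  (MIPI_SYST_CLOCK_TRANSPORT_SYNC)
 *                   = 0x01000008
 */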
struct sys_t_policy_node {
uuid_t uuid;
bool do_len;
unsigned long ts_interval;
unsigned long clocksync_interval;
};
struct sys_t_output {
struct sys_t_policy_node node;
unsigned long ts_jiffies;
unsigned long clocksync_jiffies;
};
static void sys_t_policy_node_init(void *priv)
{
struct sys_t_policy_node *pn = priv;
generate_random_uuid(pn->uuid.b);
}
static int sys_t_output_open(void *priv, struct stm_output *output)
{
struct sys_t_policy_node *pn = priv;
struct sys_t_output *opriv;
opriv = kzalloc(sizeof(*opriv), GFP_ATOMIC);
if (!opriv)
return -ENOMEM;
memcpy(&opriv->node, pn, sizeof(opriv->node));
output->pdrv_private = opriv;
return 0;
}
static void sys_t_output_close(struct stm_output *output)
{
kfree(output->pdrv_private);
}
static ssize_t sys_t_policy_uuid_show(struct config_item *item,
char *page)
{
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
return sprintf(page, "%pU\n", &pn->uuid);
}
static ssize_t
sys_t_policy_uuid_store(struct config_item *item, const char *page,
size_t count)
{
struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex;
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
int ret;
mutex_lock(mutexp);
ret = uuid_parse(page, &pn->uuid);
mutex_unlock(mutexp);
return ret < 0 ? ret : count;
}
CONFIGFS_ATTR(sys_t_policy_, uuid);
static ssize_t sys_t_policy_do_len_show(struct config_item *item,
char *page)
{
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
return sprintf(page, "%d\n", pn->do_len);
}
static ssize_t
sys_t_policy_do_len_store(struct config_item *item, const char *page,
size_t count)
{
struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex;
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
int ret;
mutex_lock(mutexp);
ret = kstrtobool(page, &pn->do_len);
mutex_unlock(mutexp);
return ret ? ret : count;
}
CONFIGFS_ATTR(sys_t_policy_, do_len);
static ssize_t sys_t_policy_ts_interval_show(struct config_item *item,
char *page)
{
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
return sprintf(page, "%u\n", jiffies_to_msecs(pn->ts_interval));
}
static ssize_t
sys_t_policy_ts_interval_store(struct config_item *item, const char *page,
size_t count)
{
struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex;
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
unsigned int ms;
int ret;
mutex_lock(mutexp);
ret = kstrtouint(page, 10, &ms);
mutex_unlock(mutexp);
if (!ret) {
pn->ts_interval = msecs_to_jiffies(ms);
return count;
}
return ret;
}
CONFIGFS_ATTR(sys_t_policy_, ts_interval);
static ssize_t sys_t_policy_clocksync_interval_show(struct config_item *item,
char *page)
{
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
return sprintf(page, "%u\n", jiffies_to_msecs(pn->clocksync_interval));
}
static ssize_t
sys_t_policy_clocksync_interval_store(struct config_item *item,
const char *page, size_t count)
{
struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex;
struct sys_t_policy_node *pn = to_pdrv_policy_node(item);
unsigned int ms;
int ret;
mutex_lock(mutexp);
ret = kstrtouint(page, 10, &ms);
mutex_unlock(mutexp);
if (!ret) {
pn->clocksync_interval = msecs_to_jiffies(ms);
return count;
}
return ret;
}
CONFIGFS_ATTR(sys_t_policy_, clocksync_interval);
static struct configfs_attribute *sys_t_policy_attrs[] = {
&sys_t_policy_attr_uuid,
&sys_t_policy_attr_do_len,
&sys_t_policy_attr_ts_interval,
&sys_t_policy_attr_clocksync_interval,
NULL,
};
static inline bool sys_t_need_ts(struct sys_t_output *op)
{
if (op->node.ts_interval &&
    time_after(jiffies, op->ts_jiffies + op->node.ts_interval)) {
op->ts_jiffies = jiffies;
return true;
}
return false;
}
static bool sys_t_need_clock_sync(struct sys_t_output *op)
{
if (op->node.clocksync_interval &&
    time_after(jiffies, op->clocksync_jiffies +
			op->node.clocksync_interval)) {
op->clocksync_jiffies = jiffies;
return true;
}
return false;
}
static ssize_t
sys_t_clock_sync(struct stm_data *data, unsigned int m, unsigned int c)
{
u32 header = CLOCK_SYNC_HEADER;
const unsigned char nil = 0;
u64 payload[2]; /* Clock value and frequency */
ssize_t sz;
sz = data->packet(data, m, c, STP_PACKET_DATA, STP_PACKET_TIMESTAMPED,
4, (u8 *)&header);
if (sz <= 0)
return sz;
payload[0] = ktime_get_real_ns();
payload[1] = NSEC_PER_SEC;
sz = stm_data_write(data, m, c, false, &payload, sizeof(payload));
if (sz <= 0)
return sz;
data->packet(data, m, c, STP_PACKET_FLAG, 0, 0, &nil);
return sizeof(header) + sizeof(payload);
}
static ssize_t sys_t_write(struct stm_data *data, struct stm_output *output,
unsigned int chan, const char *buf, size_t count)
{
struct sys_t_output *op = output->pdrv_private;
unsigned int c = output->channel + chan;
unsigned int m = output->master;
const unsigned char nil = 0;
u32 header = DATA_HEADER;
ssize_t sz;
/* We require an existing policy node to proceed */
if (!op)
return -EINVAL;
if (sys_t_need_clock_sync(op)) {
sz = sys_t_clock_sync(data, m, c);
if (sz <= 0)
return sz;
}
if (op->node.do_len)
header |= MIPI_SYST_OPT_LEN;
if (sys_t_need_ts(op))
header |= MIPI_SYST_OPT_TS;
/*
* STP framing rules for SyS-T frames:
* * the first packet of the SyS-T frame is timestamped;
* * the last packet is a FLAG.
*/
/* Message layout: HEADER / GUID / [LENGTH /][TIMESTAMP /] DATA */
/* HEADER */
sz = data->packet(data, m, c, STP_PACKET_DATA, STP_PACKET_TIMESTAMPED,
4, (u8 *)&header);
if (sz <= 0)
return sz;
/* GUID */
sz = stm_data_write(data, m, c, false, op->node.uuid.b, UUID_SIZE);
if (sz <= 0)
return sz;
/* [LENGTH] */
if (op->node.do_len) {
u16 length = count;
sz = data->packet(data, m, c, STP_PACKET_DATA, 0, 2,
(u8 *)&length);
if (sz <= 0)
return sz;
}
/* [TIMESTAMP] */
if (header & MIPI_SYST_OPT_TS) {
u64 ts = ktime_get_real_ns();
sz = stm_data_write(data, m, c, false, &ts, sizeof(ts));
if (sz <= 0)
return sz;
}
/* DATA */
sz = stm_data_write(data, m, c, false, buf, count);
if (sz > 0)
data->packet(data, m, c, STP_PACKET_FLAG, 0, 0, &nil);
return sz;
}
static const struct stm_protocol_driver sys_t_pdrv = {
.owner = THIS_MODULE,
.name = "p_sys-t",
.priv_sz = sizeof(struct sys_t_policy_node),
.write = sys_t_write,
.policy_attr = sys_t_policy_attrs,
.policy_node_init = sys_t_policy_node_init,
.output_open = sys_t_output_open,
.output_close = sys_t_output_close,
};
static int sys_t_stm_init(void)
{
return stm_register_protocol(&sys_t_pdrv);
}
static void sys_t_stm_exit(void)
{
stm_unregister_protocol(&sys_t_pdrv);
}
module_init(sys_t_stm_init);
module_exit(sys_t_stm_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("MIPI SyS-T STM framing protocol driver");
MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@linux.intel.com>");


@@ -33,8 +33,18 @@ struct stp_policy_node {
unsigned int last_master;
unsigned int first_channel;
unsigned int last_channel;
/* this is the one that's exposed to the attributes */
unsigned char priv[0];
};
void *stp_policy_node_priv(struct stp_policy_node *pn)
{
if (!pn)
return NULL;
return pn->priv;
}
static struct configfs_subsystem stp_policy_subsys;
void stp_policy_node_get_ranges(struct stp_policy_node *policy_node,
@@ -68,6 +78,14 @@ to_stp_policy_node(struct config_item *item)
NULL;
}
void *to_pdrv_policy_node(struct config_item *item)
{
struct stp_policy_node *node = to_stp_policy_node(item);
return stp_policy_node_priv(node);
}
EXPORT_SYMBOL_GPL(to_pdrv_policy_node);
static ssize_t
stp_policy_node_masters_show(struct config_item *item, char *page)
{
@@ -163,7 +181,9 @@ unlock:
static void stp_policy_node_release(struct config_item *item)
{
struct stp_policy_node *node = to_stp_policy_node(item);
kfree(node);
}
static struct configfs_item_operations stp_policy_node_item_ops = {
@@ -182,10 +202,34 @@ static struct configfs_attribute *stp_policy_node_attrs[] = {
static const struct config_item_type stp_policy_type;
static const struct config_item_type stp_policy_node_type;
const struct config_item_type *
get_policy_node_type(struct configfs_attribute **attrs)
{
struct config_item_type *type;
struct configfs_attribute **merged;
type = kmemdup(&stp_policy_node_type, sizeof(stp_policy_node_type),
GFP_KERNEL);
if (!type)
return NULL;
merged = memcat_p(stp_policy_node_attrs, attrs);
if (!merged) {
kfree(type);
return NULL;
}
type->ct_attrs = merged;
return type;
}
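get_policy_node_type() leans on memcat_p() (a helper added elsewhere in this series) to splice the generic policy-node attributes and a protocol driver's extra attributes into one NULL-terminated array. A minimal userspace sketch of those semantics, assuming memcat_p() simply concatenates two NULL-terminated pointer arrays into a freshly allocated one:

#include <stdlib.h>

/* Concatenate two NULL-terminated pointer arrays; caller frees the result. */
static void **memcat_p_sketch(void **a, void **b)
{
	size_t na = 0, nb = 0, i;
	void **out;

	while (a[na])
		na++;
	while (b[nb])
		nb++;

	out = calloc(na + nb + 1, sizeof(*out));	/* +1 keeps the NULL */
	if (!out)
		return NULL;

	for (i = 0; i < na; i++)
		out[i] = a[i];
	for (i = 0; i < nb; i++)
		out[na + i] = b[i];
	return out;
}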
static struct config_group *
stp_policy_node_make(struct config_group *group, const char *name)
{
const struct config_item_type *type = &stp_policy_node_type;
struct stp_policy_node *policy_node, *parent_node;
const struct stm_protocol_driver *pdrv;
struct stp_policy *policy;
if (group->cg_item.ci_type == &stp_policy_type) {
@@ -199,12 +243,20 @@ stp_policy_node_make(struct config_group *group, const char *name)
if (!policy->stm)
return ERR_PTR(-ENODEV);
pdrv = policy->stm->pdrv;
policy_node =
kzalloc(offsetof(struct stp_policy_node, priv[pdrv->priv_sz]),
GFP_KERNEL);
if (!policy_node)
return ERR_PTR(-ENOMEM);
if (pdrv->policy_node_init)
pdrv->policy_node_init((void *)policy_node->priv);
if (policy->stm->pdrv_node_type)
type = policy->stm->pdrv_node_type;
config_group_init_type_name(&policy_node->group, name, type);
policy_node->policy = policy;
@@ -254,8 +306,25 @@ static ssize_t stp_policy_device_show(struct config_item *item,
CONFIGFS_ATTR_RO(stp_policy_, device);
static ssize_t stp_policy_protocol_show(struct config_item *item,
char *page)
{
struct stp_policy *policy = to_stp_policy(item);
ssize_t count;
count = sprintf(page, "%s\n",
(policy && policy->stm) ?
policy->stm->pdrv->name :
"<none>");
return count;
}
CONFIGFS_ATTR_RO(stp_policy_, protocol);
static struct configfs_attribute *stp_policy_attrs[] = {
&stp_policy_attr_device,
&stp_policy_attr_protocol,
NULL,
};
@@ -276,6 +345,7 @@ void stp_policy_unbind(struct stp_policy *policy)
stm->policy = NULL;
policy->stm = NULL;
stm_put_protocol(stm->pdrv);
stm_put_device(stm);
}
@@ -311,11 +381,14 @@ static const struct config_item_type stp_policy_type = {
};
static struct config_group *
stp_policy_make(struct config_group *group, const char *name)
{
const struct config_item_type *pdrv_node_type;
const struct stm_protocol_driver *pdrv;
char *devname, *proto, *p;
struct config_group *ret;
struct stm_device *stm;
int err;
devname = kasprintf(GFP_KERNEL, "%s", name);
if (!devname)
@@ -326,6 +399,7 @@ stp_policy_make(struct config_group *group, const char *name)
* <device_name> is the name of an existing stm device; may
* contain dots;
* <policy_name> is an arbitrary string; may not contain dots
* <device_name>:<protocol_name>.<policy_name> selects a specific protocol
*/
p = strrchr(devname, '.');
if (!p) {
@@ -335,11 +409,28 @@ stp_policy_make(struct config_group *group, const char *name)
*p = '\0';
/*
* look for ":<protocol_name>":
* + no protocol suffix: fall back to whatever is available;
* + unknown protocol: fail the whole thing
*/
proto = strrchr(devname, ':');
if (proto)
*proto++ = '\0';
stm = stm_find_device(devname);
if (!stm) {
kfree(devname);
return ERR_PTR(-ENODEV);
}
err = stm_lookup_protocol(proto, &pdrv, &pdrv_node_type);
kfree(devname);
if (err) {
stm_put_device(stm);
return ERR_PTR(-ENODEV);
}
mutex_lock(&stm->policy_mutex);
if (stm->policy) {
@@ -349,31 +440,37 @@ stp_policy_make(struct config_group *group, const char *name)
stm->policy = kzalloc(sizeof(*stm->policy), GFP_KERNEL);
if (!stm->policy) {
mutex_unlock(&stm->policy_mutex);
stm_put_protocol(pdrv);
stm_put_device(stm);
return ERR_PTR(-ENOMEM);
}
config_group_init_type_name(&stm->policy->group, name,
&stp_policy_type);
stm->pdrv = pdrv;
stm->pdrv_node_type = pdrv_node_type;
stm->policy->stm = stm;
ret = &stm->policy->group;
unlock_policy:
mutex_unlock(&stm->policy_mutex);
if (IS_ERR(ret)) {
stm_put_protocol(stm->pdrv);
stm_put_device(stm);
}
return ret;
}
static struct configfs_group_operations stp_policy_root_group_ops = {
.make_group = stp_policy_make,
};
static const struct config_item_type stp_policy_root_type = {
.ct_group_ops = &stp_policy_root_group_ops,
.ct_owner = THIS_MODULE,
};
@@ -381,7 +478,7 @@ static struct configfs_subsystem stp_policy_subsys = {
.su_group = {
.cg_item = {
.ci_namebuf = "stp-policy",
.ci_type = &stp_policy_root_type,
},
},
};
@@ -392,7 +489,7 @@ static struct configfs_subsystem stp_policy_subsys = {
static struct stp_policy_node *
__stp_policy_node_lookup(struct stp_policy *policy, char *s)
{
struct stp_policy_node *policy_node, *ret;
struct stp_policy_node *policy_node, *ret = NULL;
struct list_head *head = &policy->group.cg_children;
struct config_item *item;
char *start, *end = s;
@@ -400,10 +497,6 @@ __stp_policy_node_lookup(struct stp_policy *policy, char *s)
if (list_empty(head))
return NULL;
next:
for (;;) {
start = strsep(&end, "/");
@@ -449,25 +542,25 @@ stp_policy_node_lookup(struct stm_device *stm, char *s)
if (policy_node)
config_item_get(&policy_node->group.cg_item);
else
mutex_unlock(&stp_policy_subsys.su_mutex);
return policy_node;
}
void stp_policy_node_put(struct stp_policy_node *policy_node)
{
lockdep_assert_held(&stp_policy_subsys.su_mutex);
mutex_unlock(&stp_policy_subsys.su_mutex);
config_item_put(&policy_node->group.cg_item);
}
int __init stp_configfs_init(void)
{
config_group_init(&stp_policy_subsys.su_group);
mutex_init(&stp_policy_subsys.su_mutex);
return configfs_register_subsystem(&stp_policy_subsys);
}
void __exit stp_configfs_exit(void)


@@ -10,20 +10,17 @@
#ifndef _STM_STM_H_
#define _STM_STM_H_
#include <linux/configfs.h>
struct stp_policy;
struct stp_policy_node;
struct stm_protocol_driver;
int stp_configfs_init(void);
void stp_configfs_exit(void);
void *stp_policy_node_priv(struct stp_policy_node *pn);
struct stp_master {
unsigned int nr_free;
unsigned long chan_map[0];
@@ -40,6 +37,9 @@ struct stm_device {
struct mutex link_mutex;
spinlock_t link_lock;
struct list_head link_list;
/* framing protocol in use */
const struct stm_protocol_driver *pdrv;
const struct config_item_type *pdrv_node_type;
/* master allocation */
spinlock_t mc_lock;
struct stp_master *masters[0];
@@ -48,16 +48,28 @@ struct stm_device {
#define to_stm_device(_d) \
container_of((_d), struct stm_device, dev)
struct stp_policy_node *
stp_policy_node_lookup(struct stm_device *stm, char *s);
void stp_policy_node_put(struct stp_policy_node *policy_node);
void stp_policy_unbind(struct stp_policy *policy);
void stp_policy_node_get_ranges(struct stp_policy_node *policy_node,
unsigned int *mstart, unsigned int *mend,
unsigned int *cstart, unsigned int *cend);
const struct config_item_type *
get_policy_node_type(struct configfs_attribute **attrs);
struct stm_output {
spinlock_t lock;
unsigned int master;
unsigned int channel;
unsigned int nr_chans;
void *pdrv_private;
};
struct stm_file {
struct stm_device *stm;
struct stm_output output;
};
@@ -71,11 +83,35 @@ struct stm_source_device {
struct stm_device __rcu *link;
struct list_head link_entry;
/* one output per stm_source device */
struct stm_output output;
};
#define to_stm_source_device(_d) \
container_of((_d), struct stm_source_device, dev)
void *to_pdrv_policy_node(struct config_item *item);
struct stm_protocol_driver {
struct module *owner;
const char *name;
ssize_t (*write)(struct stm_data *data,
struct stm_output *output, unsigned int chan,
const char *buf, size_t count);
void (*policy_node_init)(void *arg);
int (*output_open)(void *priv, struct stm_output *output);
void (*output_close)(struct stm_output *output);
ssize_t priv_sz;
struct configfs_attribute **policy_attr;
};
int stm_register_protocol(const struct stm_protocol_driver *pdrv);
void stm_unregister_protocol(const struct stm_protocol_driver *pdrv);
int stm_lookup_protocol(const char *name,
const struct stm_protocol_driver **pdrv,
const struct config_item_type **type);
void stm_put_protocol(const struct stm_protocol_driver *pdrv);
ssize_t stm_data_write(struct stm_data *data, unsigned int m,
unsigned int c, bool ts_first, const void *buf,
size_t count);
#endif /* _STM_STM_H_ */
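The declarations above are the whole contract a framing protocol driver has to satisfy: provide a write path and, optionally, policy-node hooks, then register. A hedged skeleton against this interface; the name p_example, the unframed pass-through write, and the omission of policy hooks are assumptions for illustration, not the in-tree p_basic driver:

// SPDX-License-Identifier: GPL-2.0
#include <linux/module.h>
#include "stm.h"

/* Pass the payload through unframed, timestamping the first packet. */
static ssize_t example_write(struct stm_data *data, struct stm_output *output,
			     unsigned int chan, const char *buf, size_t count)
{
	unsigned int c = output->channel + chan;
	unsigned int m = output->master;

	return stm_data_write(data, m, c, true, buf, count);
}

static const struct stm_protocol_driver example_pdrv = {
	.owner	= THIS_MODULE,
	.name	= "p_example",
	.write	= example_write,
};

static int example_stm_init(void)
{
	return stm_register_protocol(&example_pdrv);
}

static void example_stm_exit(void)
{
	stm_unregister_protocol(&example_pdrv);
}

module_init(example_stm_init);
module_exit(example_stm_exit);

MODULE_LICENSE("GPL v2");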


@@ -114,6 +114,6 @@ static struct i2c_driver ad_dpot_i2c_driver = {
module_i2c_driver(ad_dpot_i2c_driver);
MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>");
MODULE_AUTHOR("Michael Hennerich <michael.hennerich@analog.com>");
MODULE_DESCRIPTION("digital potentiometer I2C bus driver");
MODULE_LICENSE("GPL");


@@ -140,7 +140,7 @@ static struct spi_driver ad_dpot_spi_driver = {
module_spi_driver(ad_dpot_spi_driver);
MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>");
MODULE_AUTHOR("Michael Hennerich <michael.hennerich@analog.com>");
MODULE_DESCRIPTION("digital potentiometer SPI bus driver");
MODULE_LICENSE("GPL");
MODULE_ALIAS("spi:ad_dpot");


@@ -1,7 +1,7 @@
/*
* ad525x_dpot: Driver for the Analog Devices digital potentiometers
* Copyright (c) 2009-2010 Analog Devices, Inc.
* Author: Michael Hennerich <michael.hennerich@analog.com>
*
* DEVID #Wipers #Positions Resistor Options (kOhm)
* AD5258 1 64 1, 10, 50, 100
@@ -64,7 +64,7 @@
* Author: Chris Verges <chrisv@cyberswitching.com>
*
* derived from ad5252.c
* Copyright (c) 2006-2011 Michael Hennerich <michael.hennerich@analog.com>
*
* Licensed under the GPL-2 or later.
*/
@@ -760,6 +760,6 @@ EXPORT_SYMBOL(ad_dpot_remove);
MODULE_AUTHOR("Chris Verges <chrisv@cyberswitching.com>, "
"Michael Hennerich <hennerich@blackfin.uclinux.org>");
"Michael Hennerich <michael.hennerich@analog.com>");
MODULE_DESCRIPTION("Digital potentiometer driver");
MODULE_LICENSE("GPL");


@@ -188,7 +188,6 @@ struct apds990x_chip {
#define APDS_LUX_DEFAULT_RATE 200
static const u8 again[] = {1, 8, 16, 120}; /* ALS gain steps */
/* Following two tables must match i.e 10Hz rate means 1 as persistence value */
static const u16 arates_hz[] = {10, 5, 2, 1};


@@ -180,9 +180,6 @@ static const char reg_vleds[] = "Vleds";
static const s16 prox_rates_hz[] = {100, 50, 33, 25, 14, 10, 5, 2};
static const s16 prox_rates_ms[] = {10, 20, 30, 40, 70, 100, 200, 500};
/*
* Supported stand alone rates in ms from chip data sheet
* {100, 200, 500, 1000, 2000};


@@ -92,8 +92,8 @@ static int update_property(struct device_node *dn, const char *name,
val = (u32 *)new_prop->value;
rc = cxl_update_properties(dn, new_prop);
pr_devel("%s: update property (%s, length: %i, value: %#x)\n",
dn->name, name, vd, be32_to_cpu(*val));
pr_devel("%pOFn: update property (%s, length: %i, value: %#x)\n",
dn, name, vd, be32_to_cpu(*val));
if (rc) {
kfree(new_prop->name);


@@ -1018,8 +1018,6 @@ err1:
void cxl_guest_remove_afu(struct cxl_afu *afu)
{
pr_devel("in %s - AFU(%d)\n", __func__, afu->slice);


@@ -381,7 +381,7 @@ int16_t oslec_update(struct oslec_state *ec, int16_t tx, int16_t rx)
*/
ec->factor = 0;
ec->shift = 0;
if (!ec->nonupdate_dwell) {
int p, logp, shift;
/* Determine:


@@ -111,4 +111,15 @@ config EEPROM_IDT_89HPESX
This driver can also be built as a module. If so, the module
will be called idt_89hpesx.
config EEPROM_EE1004
tristate "SPD EEPROMs on DDR4 memory modules"
depends on I2C && SYSFS
help
Enable this driver to get read support for SPD EEPROMs following
the JEDEC EE1004 standard. These are typically found on DDR4
SDRAM memory modules.
This driver can also be built as a module. If so, the module
will be called ee1004.
endmenu


@@ -7,3 +7,4 @@ obj-$(CONFIG_EEPROM_93CX6) += eeprom_93cx6.o
obj-$(CONFIG_EEPROM_93XX46) += eeprom_93xx46.o
obj-$(CONFIG_EEPROM_DIGSY_MTC_CFG) += digsy_mtc_eeprom.o
obj-$(CONFIG_EEPROM_IDT_89HPESX) += idt_89hpesx.o
obj-$(CONFIG_EEPROM_EE1004) += ee1004.o
