Commit graph

63 commits

Author SHA1 Message Date
Amelie Delaunay
140fd5e74a dmaengine: stm32-dma: fix potential race between pause and resume
When disabling a DMA channel, the TCF flag is set and, as TCIE is enabled,
an interrupt is raised.
On a busy system, the interrupt may have latency and the user can ask for
dmaengine_resume while the stm32-dma driver has not yet completed the pause
(backup of registers to restore the state on resume).
To avoid such a case, instead of waiting for the interrupt to back up the
registers, do it just after disabling the channel, and discard the Transfer
Complete interrupt when the channel is paused.

Fixes: 099a9a94be ("dmaengine: stm32-dma: add device_pause/device_resume support")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20221024083611.132588-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-11-08 10:43:56 +05:30
Amelie Delaunay
723795173c dmaengine: stm32-dma: add support to trigger STM32 MDMA
STM32 MDMA can be triggered by an STM32 DMA channel's transfer complete.
The "request line number" triggering the STM32 MDMA is the STM32 DMAMUX
channel id set by the stm32-dmamux driver in dma_spec->args[3].

The stm32-dma driver fills the struct stm32_dma_mdma_config used to
configure the MDMA, passed through struct dma_slave_config
.peripheral_config/.peripheral_size.
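
As a rough illustration of the consumer side (a sketch, not taken from the
patch: the peripheral address, width and mdma_cfg contents are placeholders),
the driver-specific configuration travels through the generic slave config:

struct stm32_dma_mdma_config mdma_cfg;  /* MDMA trigger parameters, filled elsewhere */
struct dma_slave_config cfg = {
        .direction         = DMA_MEM_TO_DEV,
        .dst_addr          = periph_addr,                 /* placeholder */
        .dst_addr_width    = DMA_SLAVE_BUSWIDTH_4_BYTES,
        .peripheral_config = &mdma_cfg,                   /* consumed by stm32-dma */
        .peripheral_size   = sizeof(mdma_cfg),
};

ret = dmaengine_slave_config(chan, &cfg);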

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220829154646.29867-6-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-04 22:48:02 +05:30
Amelie Delaunay
1c32d6c37c dmaengine: stm32-dma: use bitfield helpers
Use the FIELD_{GET,PREP}() helpers, instead of defining custom macros
implementing the same operations.

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220829154646.29867-3-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-04 22:48:02 +05:30
Amelie Delaunay
4dc36a53b8 dmaengine: stm32-dma: introduce 3 helpers to address channel flags
Channels 0 to 3 flags are described in DMA_LISR and DMA_LIFCR (L as Low).
Channels 4 to 7 flags are described in DMA_HISR and DMA_HIFCR (H as High).
Macro STM32_DMA_ISR(n) returns the interrupt status register offset for the
channel id (n).
Macro STM32_DMA_IFCR(n) returns the interrupt flag clear register offset
for the channel id (n).

If chan->id % 4 = 2 or 3, then its flags are left-shifted by 16 bits.
If chan->id % 4 = 1 or 3, then its flags are additionally left-shifted by 6
bits.
If chan->id % 4 = 0, then its flags are not shifted.
Macro STM32_DMA_FLAGS_SHIFT(n) returns the required shift to get or set the
channel flags mask.
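
As a sketch of what these helpers can look like (register offsets follow the
usual STM32 DMA layout; the macro bodies here are illustrative, not quoted
from the patch):

#define STM32_DMA_LISR                  0x0000 /* low interrupt status register */
#define STM32_DMA_HISR                  0x0004 /* high interrupt status register */
#define STM32_DMA_LIFCR                 0x0008 /* low interrupt flag clear register */
#define STM32_DMA_HIFCR                 0x000c /* high interrupt flag clear register */

/* channels 0 to 3 use the Low registers, channels 4 to 7 the High ones */
#define STM32_DMA_ISR(n)                (((n) & 4) ? STM32_DMA_HISR : STM32_DMA_LISR)
#define STM32_DMA_IFCR(n)               (((n) & 4) ? STM32_DMA_HIFCR : STM32_DMA_LIFCR)

/* 16-bit shift when chan->id % 4 is 2 or 3, extra 6-bit shift when it is 1 or 3 */
#define STM32_DMA_FLAGS_SHIFT(n)        ({ typeof(n) (_n) = (n); \
                                           (((_n) & 2) << 3) | (((_n) & 1) * 6); })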

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220829154646.29867-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-09-04 22:48:02 +05:30
Amelie Delaunay
099a9a94be dmaengine: stm32-dma: add device_pause/device_resume support
At any time, a DMA transfer can be suspended to be restarted later before
the end of the DMA transfer.

In order to restart from the point where the transfer was stopped,
DMA_SxNDTR has to be read after disabling the channel by clearing the EN
bit in the DMA_SxCR register, to know the number of data items already
collected.
Peripheral and/or memory addresses have to be updated in order to adjust
the address pointers.
The SxNDTR register has to be updated with the remaining number of data
items to be transferred (the value read when the channel was disabled).
Then the channel can be re-enabled to resume the transfer from the point
it was suspended.
If the channel was configured in circular or double-buffer mode, that mode
must be disabled before re-enabling the channel, so that the SxNDTR register
can be reconfigured and circular or double-buffer mode re-activated on the
next Transfer Complete interrupt, where the channel is disabled by hardware.
This is because, on resume, re-writing the SxNDTR register value also updates
the internal HW auto-reload data counter, which would otherwise truncate all
subsequent transfers after a pause/resume sequence.
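
A rough sketch of the resume sequence described above (illustrative only:
the read/write accessors and the saved-NDTR variable are assumptions,
register and bit names follow the reference manual):

/* Illustrative resume sequence, not the driver's actual code */
u32 scr = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id));

/* circular/double-buffer mode must be off while SxNDTR is rewritten */
scr &= ~(STM32_DMA_SCR_CIRC | STM32_DMA_SCR_DBM);
stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), scr);

/* restart with the number of data items left when the channel was disabled */
stm32_dma_write(dmadev, STM32_DMA_SNDTR(chan->id), ndtr_saved_on_pause);

/* re-enable; CIRC/DBM is re-activated on the next Transfer Complete
 * interrupt, where the channel is disabled by hardware */
stm32_dma_write(dmadev, STM32_DMA_SCR(chan->id), scr | STM32_DMA_SCR_EN);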

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220505115611.38845-5-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-19 23:43:41 +05:30
Amelie Delaunay
baa1424314 dmaengine: stm32-dma: rename pm ops before dma pause/resume introduction
The dmaengine framework offers device_pause and device_resume ops to pause
an ongoing transfer and resume it later.
To avoid any confusion with the system sleep PM ops, rename the PM ops to
stm32_dma_pm_suspend and stm32_dma_pm_resume.

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220505115611.38845-4-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-19 23:43:41 +05:30
Amelie Delaunay
ded6230691 dmaengine: stm32-dma: pass DMA_SxSCR value to stm32_dma_handle_chan_done()
stm32_dma_handle_chan_done() is called on the Transfer Complete interrupt.
As the DMA_SxSCR register is read in the interrupt handler, pass its value
as a parameter of stm32_dma_handle_chan_done(). Also return directly if
chan->desc is NULL, to remove one indentation level.
Then, stm32_dma_configure_next_sg() only does something if Double-Buffer
Mode (DBM) is enabled, so check that it is enabled prior to calling
stm32_dma_configure_next_sg(), to remove one indentation level in
stm32_dma_configure_next_sg().

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220505115611.38845-3-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-19 23:43:41 +05:30
Amelie Delaunay
db60a63eb6 dmaengine: stm32-dma: introduce stm32_dma_sg_inc to manage chan->next_sg
chan->next_sg is used to know which transfer will start after the ongoing
one. It is incremented for each new transfer, either on transfer start for
non-cyclic transfers, or on transfer complete interrupt for cyclic
transfers.
For cyclic transfer, when the last item is reached, chan->next_sg must be
reinitialized to the first item.
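
A plausible shape for such a helper (a sketch, assuming the descriptor
carries a cyclic flag and its number of items):

static void stm32_dma_sg_inc(struct stm32_dma_chan *chan)
{
        chan->next_sg++;
        /* for cyclic transfers, wrap back to the first item after the last one */
        if (chan->desc->cyclic && (chan->next_sg == chan->desc->num_sgs))
                chan->next_sg = 0;
}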

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20220505115611.38845-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-05-19 23:43:40 +05:30
Amelie Delaunay
728f6c7833 dmaengine: stm32-dma: set dma_device max_sg_burst
Some stm32-dma consumers [1] prefer to use dma_get_slave_caps() to get the
max_sg_burst of their DMA channel, as dma_get_max_seg_size() is specific to
the DMA controller.
All stm32-dma channels have the same features, so there is no need to
implement the device_caps ops; let dma_get_slave_caps() rely on the
dma_device configuration.
That's why this patch sets dma_device max_sg_burst to the maximum segment
size, which is the maximum number of data items that can be transferred
without software intervention.

[1] https://lore.kernel.org/lkml/20220110103739.118426-1-alain.volmat@foss.st.com/
    "media: stm32: dcmi: create a dma scatterlist based on DMA max_sg_burst value"

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Tested-by: Alain Volmat <alain.volmat@foss.st.com>
Link: https://lore.kernel.org/r/20220117091740.11064-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2022-02-15 11:12:07 +05:30
Arnd Bergmann
2498363310 dmaengine: stm32-dma: avoid 64-bit division in stm32_dma_get_max_width
Using the % operator on a 64-bit variable is expensive and can
cause a link failure:

arm-linux-gnueabi-ld: drivers/dma/stm32-dma.o: in function `stm32_dma_get_max_width':
stm32-dma.c:(.text+0x170): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/dma/stm32-dma.o: in function `stm32_dma_set_xfer_param':
stm32-dma.c:(.text+0x1cd4): undefined reference to `__aeabi_uldivmod'

Since we only need to check the alignment in stm32_dma_get_max_width(),
there is no need for a full division; a simple mask is a faster
replacement.

Likewise in stm32_dma_set_xfer_param(), change this to only allow burst
transfers if the address is a multiple of the length.
stm32_dma_get_best_burst(), called just after, will take buf_len into
account to fix the burst in case of misalignment.
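
In reduced form, the idea is simply (illustrative helper, not the driver's
code):

/* width is a power of two, so addr % width == addr & (width - 1) */
static bool stm32_dma_addr_aligned(u64 addr, u32 width)
{
        return (addr & (width - 1)) == 0;       /* no __aeabi_uldivmod needed */
}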

Fixes: b20fd5fa31 ("dmaengine: stm32-dma: fix stm32_dma_get_max_width")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211103153312.41483-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-11-09 11:20:35 +05:30
Amelie Delaunay
af229d2c25 dmaengine: stm32-dma: fix burst in case of unaligned memory address
Theoretically, the address pointers used by the STM32 DMA must be chosen so
as to ensure that all transfers within a burst block are aligned on an
address boundary equal to the size of the transfer.
While this is always the case for peripheral addresses on STM32, it is not
for memory addresses if the user doesn't respect this alignment constraint.
To avoid weird behavior of the DMA controller in this case (no error is
triggered but data are not transferred as expected), force no burst.

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211011094259.315023-4-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-10-18 12:12:50 +05:30
Amelie Delaunay
b20fd5fa31 dmaengine: stm32-dma: fix stm32_dma_get_max_width
The buf_addr parameter of the stm32_dma_set_xfer_param() function is a
dma_addr_t. We only need to check the remainder of buf_addr / max_width, so
there is no need to use do_div() and an extra u64 addr. Use '%' instead.

Fixes: e0ebdbdcb4 ("dmaengine: stm32-dma: take address into account when computing max width")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211011094259.315023-3-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-10-18 12:12:50 +05:30
Amelie Delaunay
79e40b06a4 dmaengine: stm32-dma: mark pending descriptor complete in terminate_all
To prevent accidental repeated completion, mark the pending descriptor
complete in terminate_all. This can happen when terminate_all is called
while no end-of-transfer interrupt has occurred.

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211011094259.315023-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-10-18 12:12:50 +05:30
Vinod Koul
ffa179ae2a Merge branch 'fixes' into next 2021-08-02 12:34:48 +05:30
Amelie Delaunay
2b5b74054c dmaengine: stm32-dma: add alternate REQ/ACK protocol management
STM32 USART/UART does not manage the default DMA REQ/ACK protocol correctly,
which can lock the DMA stream.
The default protocol consists in keeping the ACK signal asserted until both
the REQuest is removed and the transfer completes.
With the alternative REQ/ACK protocol, ACK de-assertion does not wait for
the removal of the REQuest, only for the transfer completion.

This patch retrieves the need for the alternative protocol from the device
tree and sets the protocol accordingly.
It also unwraps the STM32_DMA_DIRECT_MODE_GET macro definition for
consistency with the new STM32_DMA_ALT_ACK_MODE_GET macro definition.

Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20210624093959.142265-3-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-07-28 12:39:48 +05:30
Zhang Qilong
d54db74ad6 dmaengine: stm32-dma: Fix PM usage counter imbalance in stm32 dma ops
pm_runtime_get_sync() increments the PM usage counter even when it fails.
Forgetting the corresponding put operation results in a reference leak here.
Fix it by replacing the call with pm_runtime_resume_and_get() to keep the
usage counter balanced.
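
The pattern, roughly (a before/after sketch; the device pointer expression is
an assumption):

/* Before: the usage counter stays incremented even when resume fails */
ret = pm_runtime_get_sync(dmadev->ddev.dev);
if (ret < 0)
        return ret;                     /* leaks one PM reference */

/* After: the helper drops the reference itself on failure */
ret = pm_runtime_resume_and_get(dmadev->ddev.dev);
if (ret < 0)
        return ret;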

Fixes: 48bc73ba14 ("dmaengine: stm32-dma: Add PM Runtime support")
Fixes: 05f8740a0e ("dmaengine: stm32-dma: add suspend/resume power management support")
Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
Link: https://lore.kernel.org/r/20210607064640.121394-2-zhangqilong3@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-07-28 12:04:12 +05:30
Amelie Delaunay
e0ebdbdcb4 dmaengine: stm32-dma: take address into account when computing max width
The DMA_SxPAR or DMA_SxM0AR/M1AR registers have to be aligned on PSIZE or
MSIZE respectively. This means that the bus width needs to be forced to
1 byte when the computed width is not aligned with the address.

Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20201120143320.30367-4-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-12-11 21:13:08 +05:30
Amelie Delaunay
5d4d4dfbda dmaengine: stm32-dma: clean channel configuration when channel is freed
When dma_channel_release is called, it means that the channel won't be used
anymore with the configuration it had. To ensure a future client can safely
use the channel after it has been released, clean up the configuration done
when the channel was requested.

Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20201120143320.30367-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-12-11 21:13:08 +05:30
Amelie Delaunay
a44d9d7245 dmaengine: stm32-dma: rework irq handler to manage error before xfer events
To better understand the errors that can be detected by the DMA controller,
handle the error flags before the transfer flags.
This way, it is possible to know whether the FIFO error flag is set for an
over/underrun condition or for a FIFO level error.
When a FIFO over/underrun condition occurs, the data is not lost because the
peripheral request is not acknowledged by the stream until the over/underrun
condition is cleared. If this acknowledgment takes too long, the peripheral
itself may detect an over/underrun of its internal buffer and data might be
lost.
That's why, when the FIFO error flag is set, we check whether the channel is
disabled and whether the Transfer Complete flag is set, which means that the
channel was disabled because of the end of transfer: the channel is disabled
by hardware either on a FIFO level error or on an end of transfer.

Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20201120143320.30367-2-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-12-11 21:13:08 +05:30
Krzysztof Kozlowski
1c966e1d94 dmaengine: stm32: Simplify with dev_err_probe()
The common pattern of handling deferred probe can be simplified with
dev_err_probe(). Less code, and the error value gets printed.
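
The pattern in question, sketched on one of the driver's resources (the
resource and the message text are illustrative):

/* Before */
rst = devm_reset_control_get(&pdev->dev, NULL);
if (IS_ERR(rst)) {
        ret = PTR_ERR(rst);
        if (ret != -EPROBE_DEFER)
                dev_err(&pdev->dev, "Error: Missing reset controller\n");
        return ret;
}

/* After: shorter, and the error value is printed too */
rst = devm_reset_control_get(&pdev->dev, NULL);
if (IS_ERR(rst))
        return dev_err_probe(&pdev->dev, PTR_ERR(rst),
                             "Error: Missing reset controller\n");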

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Link: https://lore.kernel.org/r/20200828152637.16903-2-krzk@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-09-03 12:38:59 +05:30
Amelie Delaunay
955b17665d dmaengine: stm32-dma: direct mode support through device tree
Direct mode or FIFO mode is computed by the stm32-dma driver. Add a way for
the user to force direct mode, by setting bit 2 in the bitfield value
specifying DMA features in the device tree.

Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200422102904.1448-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-04-27 21:40:12 +05:30
Amelie Delaunay
d80cbef35b dmaengine: stm32-dma: use vchan_terminate_vdesc() in .terminate_all
To avoid a race with vchan_complete, use the race-free way to terminate a
running transfer.

Move the vdesc->node list_del to stm32_dma_start_transfer instead of
stm32_mdma_chan_complete, to avoid another race in vchan_dma_desc_free_list.

Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-9-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:05 +05:30
Amelie Delaunay
409ffc4d99 dmaengine: stm32-dma: fix sleeping function called from invalid context
This patch fixes a "BUG: sleeping function called from invalid context" in
the stm32_dma_disable_chan() function.

The goal of this function is to force the channel disable if it has not been
disabled by hardware. This consists in clearing the STM32_DMA_SCR_EN bit and
reading it back as 0, to ensure the channel is properly disabled and the last
transfer is over.

In the previous implementation, the waiting loop was based on a do...while (1)
with a call to cond_resched to give the scheduler a chance to run a higher
priority process.

But in some conditions, stm32_dma_disable_chan can be called while
preemption is disabled, for example from a stm32_dma_stop call. So
cond_resched must not be used.

To avoid this, use readl_relaxed_poll_timeout_atomic to poll until the
STM32_DMA_SCR_EN bit is cleared.
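
Roughly the resulting wait (a sketch: the register accessor and the timeout
values are assumptions):

u32 dma_scr;
int ret;

/* atomic-safe polling until the hardware clears the EN bit */
ret = readl_relaxed_poll_timeout_atomic(dmadev->base + STM32_DMA_SCR(chan->id),
                                        dma_scr, !(dma_scr & STM32_DMA_SCR_EN),
                                        10, 1000000);
if (ret)
        dev_err(chan2dev(chan), "%s: timeout waiting for channel disable\n",
                __func__);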

Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-8-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:05 +05:30
Amelie Delaunay
32ce108833 dmaengine: stm32-dma: add copy_align constraint
This patch adds the copy_align property in accordance with the hardware
restriction.

Signed-off-by: Ludovic Barre <ludovic.barre@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-7-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:05 +05:30
Amelie Delaunay
d7a9e42609 dmaengine: stm32-dma: use dma_set_max_seg_size to set the sg limit
This patch adds dma_set_max_seg_size() to define the scatter-gather DMA
constraint. This constraint may be taken into account by the client to
scatter/gather its buffer.

Signed-off-by: Ludovic Barre <ludovic.barre@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-6-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:05 +05:30
Pierre-Yves MORDRET
22a0bb297c dmaengine: stm32-dma: enable descriptor_reuse
Enable clients to resubmit already-processed descriptors in order to save
descriptor creation time.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-5-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:05 +05:30
Etienne Carriere
615eee2c45 dmaengine: stm32-dma: driver defers probe for reset
Change the STM32 DMA driver to defer its probe operation when a reset
controller is expected but has not yet been probed at the time the DMA
device is probed.

Also change the error traces printed when failing to get a system resource,
so that nothing is printed when the failure leads to deferred probing.

Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-4-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:05 +05:30
Etienne Carriere
8cf1e0fc50 dmaengine: stm32-dma: use reset controller only at probe time
Remove the reset controller reference from the device instance since it is
only used at probe time.

Signed-off-by: Etienne Carriere <etienne.carriere@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-3-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:04 +05:30
Pierre-Yves MORDRET
05f8740a0e dmaengine: stm32-dma: add suspend/resume power management support
Add suspend/resume power management relying on the PM runtime engine.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Link: https://lore.kernel.org/r/20200129153628.29329-2-amelie.delaunay@st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-02-25 11:15:04 +05:30
Gustavo A. R. Silva
402096cb5b dmaengine: stm32-dma: Use struct_size() helper
One of the more common cases of allocation size calculations is finding
the size of a structure that has a zero-sized array at the end, along
with memory for some number of elements for that array. For example:

struct stm32_dma_desc {
	...
        struct stm32_dma_sg_req sg_req[];
};

Make use of the struct_size() helper instead of an open-coded version
in order to avoid any potential type mistakes.

So, replace the following function:

static struct stm32_dma_desc *stm32_dma_alloc_desc(u32 num_sgs)
{
       return kzalloc(sizeof(struct stm32_dma_desc) +
                      sizeof(struct stm32_dma_sg_req) * num_sgs, GFP_NOWAIT);
}

with:

kzalloc(struct_size(desc, sg_req, num_sgs), GFP_NOWAIT)

This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Link: https://lore.kernel.org/r/20190830161423.GA3483@embeddedor
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-09-04 10:25:08 +05:30
Stephen Boyd
e17be6e1b7 dmaengine: Remove dev_err() usage after platform_get_irq()
We don't need dev_err() messages when platform_get_irq() fails now that
platform_get_irq() prints an error message itself when something goes
wrong. Let's remove these prints with a simple semantic patch.

// <smpl>
@@
expression ret;
struct platform_device *E;
@@

ret =
(
platform_get_irq(E, ...)
|
platform_get_irq_byname(E, ...)
);

if ( \( ret < 0 \| ret <= 0 \) )
{
(
-if (ret != -EPROBE_DEFER)
-{ ...
-dev_err(...);
-... }
|
...
-dev_err(...);
)
...
}
// </smpl>

While we're here, remove braces on if statements that only have one
statement (manually).

Cc: Vinod Koul <vkoul@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: dmaengine@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Link: https://lore.kernel.org/r/20190730181557.90391-11-swboyd@chromium.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-07-31 20:50:53 +05:30
Linus Torvalds
47ebe00b68 dmaengine updates for v5.3-rc1
- Add support in dmaengine core to do device node checks for DT devices and
    update bunch of drivers to use that and remove open coding from drivers
  - New driver/driver support for new hardware, namely:
    - MediaTek UART APDMA
    - Freescale i.mx7ulp edma2
    - Synopsys eDMA IP core version 0
    - Allwinner H6 DMA
  - Updates to axi-dma and support for interleaved cyclic transfers
  - Greg's debugfs return value check removals on drivers
  - Updates to stm32-dma, hsu, dw, pl330, tegra drivers

Merge tag 'dmaengine-5.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:

 - Add support in dmaengine core to do device node checks for DT devices
   and update bunch of drivers to use that and remove open coding from
   drivers

 - New driver/driver support for new hardware, namely:
     - MediaTek UART APDMA
     - Freescale i.mx7ulp edma2
     - Synopsys eDMA IP core version 0
     - Allwinner H6 DMA

 - Updates to axi-dma and support for interleaved cyclic transfers

 - Greg's debugfs return value check removals on drivers

 - Updates to stm32-dma, hsu, dw, pl330, tegra drivers

* tag 'dmaengine-5.3-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (68 commits)
  dmaengine: Revert "dmaengine: fsl-edma: add i.mx7ulp edma2 version support"
  dmaengine: at_xdmac: check for non-empty xfers_list before invoking callback
  Documentation: dmaengine: clean up description of dmatest usage
  dmaengine: tegra210-adma: remove PM_CLK dependency
  dmaengine: fsl-edma: add i.mx7ulp edma2 version support
  dt-bindings: dma: fsl-edma: add new i.mx7ulp-edma
  dmaengine: fsl-edma-common: version check for v2 instead
  dmaengine: fsl-edma-common: move dmamux register to another single function
  dmaengine: fsl-edma: add drvdata for fsl-edma
  dmaengine: Revert "dmaengine: fsl-edma: support little endian for edma driver"
  dmaengine: rcar-dmac: Reject zero-length slave DMA requests
  dmaengine: dw: Enable iDMA 32-bit on Intel Elkhart Lake
  dmaengine: dw-edma: fix semicolon.cocci warnings
  dmaengine: sh: usb-dmac: Use [] to denote a flexible array member
  dmaengine: dmatest: timeout value of -1 should specify infinite wait
  dmaengine: dw: Distinguish ->remove() between DW and iDMA 32-bit
  dmaengine: fsl-edma: support little endian for edma driver
  dmaengine: hsu: Revert "set HSU_CH_MTSR to memory width"
  dmagengine: pl330: add code to get reset property
  dt-bindings: pl330: document the optional resets property
  ...
2019-07-17 09:55:43 -07:00
Thomas Gleixner
af873fcece treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 194
Based on 1 normalized pattern(s):

  license terms gnu general public license gpl version 2

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 161 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Reviewed-by: Steve Winslow <swinslow@gmail.com>
Reviewed-by: Richard Fontana <rfontana@redhat.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190528170027.447718015@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-30 11:29:22 -07:00
Amelie Delaunay
e40543931f dmaengine: stm32-dma: Fix redundant call to platform_get_irq
Commit c6504be539 ("dmaengine: stm32-dma: Fix unsigned variable compared
with zero") duplicated the call to platform_get_irq.
So remove the first call to platform_get_irq.

Fixes: c6504be539 ("dmaengine: stm32-dma: Fix unsigned variable compared with zero")
Signed-off-by: Amelie Delaunay <amelie.delaunay@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-21 10:09:24 +05:30
Arnaud Pouliquen
2a4885abf5 dmaengine: stm32-dma: fix residue calculation in stm32-dma
In double-buffer mode, during residue calculation, the DMA can
automatically switch to the next transfer. Indeed, the CT bit that gives
the position in the double buffer may have been updated by the hardware
during the calculation.
In this case the SxNDTR register value cannot be trusted.
If a transition is detected, we consider that the DMA has switched to
the beginning of the next sg.
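
One way to picture the transition check (illustrative fragment; the accessor
and variable names are assumptions):

/* Illustrative: detect a double-buffer switch while computing the residue */
u32 ct_before = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)) & STM32_DMA_SCR_CT;
u32 ndtr      = stm32_dma_read(dmadev, STM32_DMA_SNDTR(chan->id));
u32 ct_after  = stm32_dma_read(dmadev, STM32_DMA_SCR(chan->id)) & STM32_DMA_SCR_CT;

if (ct_before != ct_after) {
        /* CT toggled between the two reads: SxNDTR cannot be trusted,
         * assume the DMA is at the beginning of the next sg */
        residue = next_sg_period_len;   /* placeholder */
}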

Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-05-04 15:46:58 +05:30
Vinod Koul
c6504be539 dmaengine: stm32-dma: Fix unsigned variable compared with zero
Commit f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()") used the
unsigned variable irq to store the result and later checked it for negative
errors, so update the code to use a signed variable for this.

Fixes: f4fd2ec08f ("dmaengine: stm32-dma: use platform_get_irq()")
Reported-by: kbuild test robot <lkp@intel.com>
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-29 09:59:07 +05:30
Fabien Dessenne
f4fd2ec08f dmaengine: stm32-dma: use platform_get_irq()
platform_get_resource(pdev, IORESOURCE_IRQ) is not recommended for
requesting IRQ resources, as they may not be ready yet. Using
platform_get_irq() instead is preferred for getting an IRQ even if it was
not retrieved earlier.

Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-04-26 22:33:34 +05:30
Pierre-Yves MORDRET
48bc73ba14 dmaengine: stm32-dma: Add PM Runtime support
Use the pm_runtime engine for clock management purposes.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 09:52:24 +05:30
Pierre-Yves MORDRET
ca4c72c01e dmaengine: stm32-dma: check FIFO error interrupt enable
To avoid false FIFO error detection, check that the FIFO Error interrupt is
enabled prior to raising any error.
This prevents spurious FIFO errors from being raised where they shouldn't be.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2019-01-07 09:52:24 +05:30
Pierre-Yves MORDRET
cc832dc8e3 dmaengine: stm32-dma: check whether length is aligned on FIFO threshold
When a period length is not a multiple of the FIFO threshold, some data may
be stuck within the FIFO.

The burst/FIFO threshold/period or buffer length checks have to be hardened.

In any case the DMA will grant any request from the client, but will degrade
any parameters that are awkward.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-10-02 20:32:15 +05:30
Benjamin Gaignard
90ec93cb6b dmaengine: stm32: replace "%p" with "%pK"
The format specifier "%p" can leak kernel addresses.
Use "%pK" instead.

Signed-off-by: Benjamin Gaignard <benjamin.gaignard@st.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
2018-07-09 23:01:57 +05:30
Pierre Yves MORDRET
9df3bd5520 dmaengine: stm32-dma: properly mask irq bits
A single register of the controller holds the information for four DMA
channels.
The function stm32_dma_irq_status() doesn't mask the relevant bits after
the shift, so adjacent channels' status is also reported in the returned
value.
Fixed by masking the value before returning it.

Similarly, the function stm32_dma_irq_clear() doesn't mask the input value
before shifting it, so an incorrect input value could disable the
interrupts of adjacent channels.
Fixed by masking the input value before using it.
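
The fix boils down to masking the channel's 6-bit flag field after the shift,
along these lines (sketch; the function and mask names are illustrative):

#define STM32_DMA_MASKI         GENMASK(5, 0)   /* one channel's FEIF/DMEIF/TEIF/HTIF/TCIF field */

/* mask after the shift so adjacent channels' bits don't leak into the result */
static u32 stm32_dma_irq_status_sketch(u32 isr, u32 flags_shift)
{
        return (isr >> flags_shift) & STM32_DMA_MASKI;
}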

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Pierre Yves MORDRET
80a76952a5 dmaengine: stm32-dma: fix max items per transfer
Having 0 in the item counter register is valid and stands for a "no or ended
transfer". Therefore a valid transfer runs from @+0 to @+0xFFFE, leading to
unaligned scatter-gather at the boundary. Thus it's safer to round this value
down to the FIFO size (16 bytes).

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Pierre Yves MORDRET
c2d86b1cd6 dmaengine: stm32-dma: fix DMA IRQ status handling
Update the way the Transfer Complete and Half Transfer Complete statuses are
acknowledged. Even if HTI is not enabled, its status is shown when reading
the registers; the driver has to clear it gracefully and not raise an error.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Pierre Yves MORDRET
a2b6103b7a dmaengine: stm32-dma: Improve memory burst management
This patch improves the memory burst capability using the best burst size
according to the buffer size transferred from/to memory.

From now on, the memory burst is not necessarily the same as the peripheral
burst, and the FIFO threshold is directly managed by this driver in order
to fit the computed memory burst.

Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Pierre Yves MORDRET
249d553118 dmaengine: stm32-dma: fix typo and reported checkpatch warnings
Fix a typo in a comment and fix the reported checkpatch warnings.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Pierre Yves MORDRET
e57cb3b3f1 dmaengine: stm32-dma: fix incomplete configuration in cyclic mode
When in cyclic mode, the configuration is updated after the DMA hardware has
been started (STM32_DMA_SCR_EN), leading to an incomplete configuration of
the SMxAR registers.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Hugues Fruchet <hugues.fruchet@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Pierre Yves MORDRET
951f44cb6c dmaengine: stm32-dma: threshold manages with bitfield feature
From now on, a DMA bitfield is used to manage the DMA FIFO threshold.

Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2018-04-04 11:49:36 +05:30
Colin Ian King
041cf7e064 dmaengine: stm32-dma: fix up error dev_err message
Trivial fix to a spelling mistake, and make "channel" plural.

Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-03-06 10:41:24 +05:30
Linus Torvalds
97a229f907 dmaengine updates for 4.11-rc1
This time we have a fairly boring and a bit small update.
 
 - Support for Intel iDMA 32-bit hardware
 - deprecate broken support for channel switching in async_tx
 - bunch of updates on stm32-dma
 - Cyclic support for zx dma, making it a generic zx dma driver
 - Small updates to bunch of other drivers

Merge tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This time we fairly boring and bit small update.

   - Support for Intel iDMA 32-bit hardware
   - deprecate broken support for channel switching in async_tx
   - bunch of updates on stm32-dma
   - Cyclic support for zx dma, making it a generic zx dma driver
   - Small updates to bunch of other drivers"

* tag 'dmaengine-4.11-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
  async_tx: deprecate broken support for channel switching
  dmaengine: rcar-dmac: Widen DMA mask to 40 bits
  dmaengine: sun6i: allow build on ARM64 platforms (sun50i)
  dmaengine: Provide a wrapper for memcpy operations
  dmaengine: zx: fix build warning
  dmaengine: dw: we do support Merrifield SoC in PCI mode
  dmaengine: dw: add support of iDMA 32-bit hardware
  dmaengine: dw: introduce register mappings for iDMA 32-bit
  dmaengine: dw: introduce block2bytes() and bytes2block()
  dmaengine: dw: extract dwc_chan_pause() for future use
  dmaengine: dw: replace convert_burst() with one liner
  dmaengine: dw: register IRQ and DMA pool with instance ID
  dmaengine: dw: Fix data corruption in large device to memory transfers
  dmaengine: ste_dma40: indicate granularity on channels
  dmaengine: ste_dma40: indicate directions on channels
  dmaengine: stm32-dma: Add error messages if xlate fails
  dmaengine: dw: pci: remove LPE Audio DMA ID
  dmaengine: stm32-dma: Add max_burst support
  dmaengine: stm32-dma: Add synchronization support
  dmaengine: stm32-dma: Fix residue computation issue in cyclic mode
  ...
2017-02-21 17:06:22 -08:00