Set the device's max_burst to 16777215 (EN is a 24-bit unsigned value) so
clients can take this into consideration when setting up the transfer.
During slave transfer preparation, check that the requested maxburst is valid.
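For reference, a client would typically consume the advertised limit like
this; a minimal sketch using only generic dmaengine calls (the channel, the
FIFO address and the burst of 16 words are illustrative assumptions):

  #include <linux/dmaengine.h>

  static int cfg_tx(struct dma_chan *chan, dma_addr_t fifo_addr)
  {
          struct dma_slave_caps caps;
          struct dma_slave_config cfg = { };
          int ret;

          ret = dma_get_slave_caps(chan, &caps);
          if (ret)
                  return ret;

          cfg.direction = DMA_MEM_TO_DEV;
          cfg.dst_addr = fifo_addr;
          cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
          /* never ask for more than the controller advertises */
          cfg.dst_maxburst = min_t(u32, 16, caps.max_burst);

          return dmaengine_slave_config(chan, &cfg);
  }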
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the port_window support was verified, it was done on a setup where only
the MEM_TO_DEV direction was enabled. This went unnoticed, and thus only
this direction worked.
Now that I have managed to get a setup to verify both directions, it turned
out that the configuration was incorrect:
the omap_desc members are settings for the slave port, while the omap_sg
members apply to the memory side of the sDMA setup.
Fixes: 527a275913 ("dmaengine: omap-dma: Fix the port_window support")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: dmaengine@vger.kernel.org
Cc: dan.j.williams@intel.com
Cc: vinod.koul@intel.com
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We do not yet have users of port_window. The following errors were found
when converting the tusb6010_omap.c musb driver:
- The peripheral side must have SRC_/DST_PACKED disabled
- When configuring the burst for the peripheral side, the memory-side
  configuration was overwritten: d->csdp = ... -> d->csdp |= ...
  (see the sketch below)
- The EI and FI were configured for the wrong sides of the transfers.
With these changes and the converted tusb6010_omap.c I was able to verify
that things are working as expected.
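The second item above is the usual overwrite-vs-OR mistake when a register
value is built up in several steps; schematically (the bit names and values
are only for illustration):

  #include <stdint.h>

  #define CSDP_DST_BURST_64       (3 << 14)       /* illustrative values */
  #define CSDP_DST_PACKED         (1 << 13)
  #define CSDP_SRC_BURST_16       (1 << 7)

  struct desc_example { uint32_t csdp; };

  static void build_csdp(struct desc_example *d)
  {
          /* memory-side bits are set up first */
          d->csdp = CSDP_DST_BURST_64 | CSDP_DST_PACKED;

          /* broken: "d->csdp = CSDP_SRC_BURST_16;" would discard the
           * memory-side bits; OR the peripheral-side bits in instead */
          d->csdp |= CSDP_SRC_BURST_16;
  }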
Fixes: 201ac4861c ("dmaengine: omap-dma: Support for slave devices with data port window")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The original patch did not do what it was supposed to do, and even worse it
broke legacy boot (OMAP1).
The lch_map size should be the number of available logical channels in the
sDMA, and od->dma_requests should store the number of DMA request lines
usable in the sDMA.
In legacy mode we do not have a way to get the DMA request count; in that
case we use OMAP_SDMA_REQUESTS (127), despite the fact that OMAP1510 has
only 31 DMA request lines.
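The intended sizing, roughly (a sketch; the probe context, the lch_count
variable and the legacy flag are assumptions made for the illustration):

  /* lch_map is indexed by logical channel, not by request line */
  od->lch_map = devm_kcalloc(&pdev->dev, lch_count,
                             sizeof(*od->lch_map), GFP_KERNEL);
  if (!od->lch_map)
          return -ENOMEM;

  /* the request-line count is tracked separately; legacy boot cannot
   * query it, so fall back to OMAP_SDMA_REQUESTS (127) there */
  if (legacy)
          od->dma_requests = OMAP_SDMA_REQUESTS;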
Fixes: 2d1a9a946f ("dmaengine: omap-dma: Dynamically allocate memory for lch_map")
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: stable@vger.kernel.org # v4.9
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Based on src/dst_port_window_size - if it is set - configure the DMA
channel to use double indexing so that the transfer can loop within the
address window.
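On the client side the window is communicated through dma_slave_config; a
hedged example for a peripheral exposing a small FIFO window (the address,
width, burst and window size are made-up values):

  static int cfg_window_tx(struct dma_chan *chan, dma_addr_t win_base)
  {
          struct dma_slave_config cfg = { };

          cfg.direction = DMA_MEM_TO_DEV;
          cfg.dst_addr = win_base;        /* first address of the window */
          cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
          cfg.dst_maxburst = 16;
          cfg.dst_port_window_size = 8;   /* window length, in words */

          return dmaengine_slave_config(chan, &cfg);
  }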
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
can_pause is not initialized so it contains garbage. Fix this
by setting it to false.
Found using static analysis with cppcheck.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
This DMA driver is used by 8250-omap on the DRA7-evm. One requirement is
the ability to pause a transfer. This is currently used on the RX side. It
is possible that the UART HW aborted the RX (UART's RX-timeout) but the DMA
controller starts the transfer shortly after.
Before we can manually purge the FIFO we need to pause the transfer, check
how many bytes it already received, and terminate the transfer without it
making any further progress.
From testing on the TX side it seems possible that we invoke pause once the
transfer has completed (indicated by the missing CCR_ENABLE bit) but before
the interrupt has been noticed. In that case the interrupt will come even
after disabling it.
The AM572x manual says that we have to wait for the CCR_RD_ACTIVE &
CCR_WR_ACTIVE bits to clear before programming the channel again; hence the
drain loop. Also, it looks like without the drain the TX transfer sometimes
makes progress.
One note: the pause + resume combo is broken, because after resume the
complete transfer will be programmed again. That means the bytes already
transferred (until the pause event) will be sent again. This is currently
not important for my UART user because it only does pause + terminate.
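The RX-side sequence described above maps onto the generic API roughly like
this (a sketch, not the actual 8250-omap code; the channel, cookie and
transfer length are assumed to come from the earlier prep/submit):

  static size_t stop_rx(struct dma_chan *chan, dma_cookie_t cookie,
                        size_t xfer_len)
  {
          struct dma_tx_state state;

          dmaengine_pause(chan);
          dmaengine_tx_status(chan, cookie, &state);
          dmaengine_terminate_sync(chan);

          /* bytes actually received before the UART RX-timeout */
          return xfer_len - state.residue;
  }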
v3…v4:
- update subject line.
v2…v3:
- rephrase the comment based on Russell's information / feedback.
v1…v2:
- move the drain loop into omap_dma_drain_chan() instead of having it
twice.
- allow pause only for non-cyclic DMA_DEV_TO_MEM transfers. Add a comment
  explaining why DMA_MEM_TO_DEV is not allowed.
- clear pause on terminate_all. Otherwise pause() + terminate_all()
will keep the pause bit set and we can't pause the following
transfer.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[vigneshr@ti.com: drain channel only when buffering is on, rebase to v4.8]
Signed-off-by: Vignesh R <vigneshr@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Enable the burst and data pack modes for the scatter-gather
in order to improve the throughput of the data transfers.
The improvement has been verified with MMC HS200 mode in
the DRA72 EVM using the iozone tool to compare the read
throughput (in kB/s) with and without burst/pack for
different reclens (in kB).
  reclen    Baseline    With sDMA burst/pack
  ------    --------    --------------------
      64       46568                   50820
     128       57564                   63413
     256       65634                   74937
     512       72427                   83483
    1024       74563                   84504
    2048       76265                   86079
    4096       78045                   87335
    8192       78989                   88154
   16384       81265                   91034
Signed-off-by: Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The type of CDEI, CSEI, CDFI and CSFI is signed.
This has not caused issues so far, as we only use unsigned values.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The sDMA in OMAP3630 or newer SoCs has support for LinkedList transfer. When
the LinkedList or Descriptor load feature is present we can create a
descriptor for each SG element and program the sDMA to walk through the list
of descriptors, instead of the current sequence of sDMA stop, sDMA
reconfiguration and sDMA start after each SG transfer.
By using LinkedList transfer in sDMA the number of DMA interrupts will
decrease dramatically.
Booting up the board with filesystem on SD card for example:
W/o LinkedList support:
27: 4436 0 WUGEN 13 Level omap-dma-engine
Same board/filesystem with this patch:
27: 1027 0 WUGEN 13 Level omap-dma-engine
Or copying files from SD card to eMMC:
2.1G /usr/
232001
W/o LinkedList we see ~761069 DMA interrupts.
With LinkedList support it is down to ~269314 DMA interrupts.
With the decreased DMA interrupt number the CPU load is dropping
significantly as well.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of accessing the array via index, take the pointer first and use
it to set up the omap_sg struct.
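In other words, the pattern changes from indexed accesses to a local pointer
(the field names shown are only for illustration):

  /* before */
  d->sg[i].addr = sg_dma_address(sgent);
  d->sg[i].en = en;
  d->sg[i].fn = fn;

  /* after */
  struct omap_sg *osg = &d->sg[i];

  osg->addr = sg_dma_address(sgent);
  osg->en = en;
  osg->fn = fn;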
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Print the same information regarding the sDMA channel that the driver prints
when allocating the channel resources.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
On OMAP1 platforms we do not have 32 channels available. Allocate the
lch_map based on the available channels. This way we are not going to have
more visible channels than are available on the platform.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Flatten the indentation level of the function, which gives a better view of
the cases we handle here.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We can drop the (sg)idx parameter for the omap_dma_start_sg() function and
increment the sgidx inside of the same function.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Initial support for interleaved transfer with sDMA.
The implementation only supports DMA_MEM_TO_MEM and frame_size must be 1.
sDMA needs to be configured for double indexing when ICG is needed.
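A client can exercise this through the standard interleaved API; a sketch
with made-up geometry (64 frames of one 512-byte chunk, 64 bytes of gap
between chunks; chan, src and dst are assumed from context):

  struct dma_interleaved_template *xt;
  struct dma_async_tx_descriptor *tx;

  xt = kzalloc(sizeof(*xt) + sizeof(struct data_chunk), GFP_KERNEL);
  if (!xt)
          return -ENOMEM;

  xt->src_start = src;
  xt->dst_start = dst;
  xt->dir = DMA_MEM_TO_MEM;
  xt->src_inc = true;
  xt->dst_inc = true;
  xt->src_sgl = true;             /* honour the ICG on the source side */
  xt->dst_sgl = true;             /* ... and on the destination side */
  xt->numf = 64;                  /* 64 frames ... */
  xt->frame_size = 1;             /* ... of one chunk each */
  xt->sgl[0].size = 512;          /* chunk size in bytes */
  xt->sgl[0].icg = 64;            /* gap to the next chunk, in bytes */

  tx = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);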
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
A dmaengine device should explicitly call devm_free_irq() when using
devm_request_irq().
The irq is still enabled when the device's remove callback is executed, and
it should be quiesced before remove completes.
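In other words, the remove path now quiesces the interrupt explicitly; a
sketch with made-up names (the foo_* identifiers and the irq index are
assumptions, not the driver's actual code):

  struct foo_dmadev { struct dma_device ddev; };

  static int foo_dma_remove(struct platform_device *pdev)
  {
          struct foo_dmadev *od = platform_get_drvdata(pdev);
          int irq = platform_get_irq(pdev, 0);

          /* quiesce the irq before tearing the rest of the device down */
          devm_free_irq(&pdev->dev, irq, od);

          dma_async_device_unregister(&od->ddev);
          return 0;
  }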
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
If the client queues up more transfers, the driver will not be able to move
to the next transfer without knowing that the previous descriptor has
completed.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the channel is stopped (based on the CCR_ENABLE bit) we should not call
omap_dma_callback(), only change the return value to DMA_COMPLETE. Client
drivers will do the right thing to clean up the channel after the transfer
has been completed.
Check CCR_ENABLE only if the channel is running and not paused, since pause
in sDMA means that the channel is stopped.
This will fix one hard to reproduce race condition when the channel is
terminated during transfer (affecting cyclic operation).
Fixes: 1a7cf7b26f ("dmaengine: omap-dma: Handle cases when the channel is polled for completion")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
We need the callback to support the dmaengine_terminate_sync().
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add support for providing device to filter_fn mapping so client drivers
can switch to use the dma_request_chan() API.
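With the map registered, a client can request its channel by name regardless
of DT or legacy boot; for example (the device pointer and the "rx" name are
illustrative):

  struct dma_chan *chan;

  chan = dma_request_chan(dev, "rx");
  if (IS_ERR(chan))
          return PTR_ERR(chan);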
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When a DMA client driver does not provide a callback for the completion of a
transfer (and/or does not set DMA_PREP_INTERRUPT) but instead polls the
status of the transfer (in case of a short memcpy, for example), we will not
get an interrupt for the completion of the transfer and will not mark the
transaction as done.
Check the channel enable bit in the CCR when the status is queried; if the
channel is no longer active, call omap_dma_callback() to handle the transfer
completion.
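A polling client therefore looks roughly like this (a sketch; chan, dst, src
and len are assumed from context):

  struct dma_async_tx_descriptor *tx;
  dma_cookie_t cookie;

  /* no DMA_PREP_INTERRUPT: the status will be polled instead */
  tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, 0);
  if (!tx)
          return -ENOMEM;

  cookie = dmaengine_submit(tx);
  dma_async_issue_pending(chan);

  while (dma_async_is_tx_complete(chan, cookie, NULL, NULL) != DMA_COMPLETE)
          cpu_relax();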
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The use of a tasklet to actually start the DMA transfer slightly decreases
the DMA throughput, since it adds a small scheduling delay when the transfer
is started. In normal use, even with high I/O load, the tasklet would start
one transaction at a time; however, running dmatest for memcpy on all
available channels will cause the tasklet to start about 15 transfers.
The performance numbers on OMAP4 PandaBoard-es (test_buf_size = 6553):
With the tasklet:
dmatest: dma0chan30-copy: summary 5000 tests, 0 failures 186 iops 593 KB/s (0)
dmatest: dma0chan8-copy0: summary 5000 tests, 0 failures 184 iops 584 KB/s (0)
dmatest: dma0chan13-copy: summary 5000 tests, 0 failures 184 iops 585 KB/s (0)
dmatest: dma0chan12-copy: summary 5000 tests, 0 failures 184 iops 585 KB/s (0)
dmatest: dma0chan7-copy0: summary 5000 tests, 0 failures 183 iops 581 KB/s (0)
With this patch (no tasklet):
dmatest: dma0chan4-copy0: summary 5000 tests, 0 failures 199 iops 644 KB/s (0)
dmatest: dma0chan5-copy0: summary 5000 tests, 0 failures 199 iops 645 KB/s (0)
dmatest: dma0chan6-copy0: summary 5000 tests, 0 failures 199 iops 637 KB/s (0)
dmatest: dma0chan24-copy: summary 5000 tests, 0 failures 199 iops 638 KB/s (0)
dmatest: dma0chan16-copy: summary 5000 tests, 0 failures 199 iops 638 KB/s (0)
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The for_each_sg() macro's last parameter is intended to be used as a
counter. We can use 'i' instead of 'j' as the index within the loop.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
During a memory copy both the src and dst positions move at the same pace.
Check the dst position for progress reporting.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Jyri Sarha <jsarha@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The L3 throughput can be higher than expected when packed access
is not enabled. The ratio depends on the number of bytes in a
transaction and the EMIF interface width.
The throughput was measured for the following settings/cases:
* Case 1: Burst size of 64 bytes, packed access disabled
* Case 2: Burst size of 64 bytes, packed access enabled
* Case 3: Burst disabled, packed access disabled
Throughput measurements were done during McASP-based audio
playback on the Jacinto6 EVM using the omapconf tool [1]:
$ omapconf trace bw -m sdma_rd
  ---------------------------------------------------------
                                      Throughput (MB/s)
  Audio parameters              Case 1    Case 2    Case 3
  ---------------------------------------------------------
  44.1kHz, 16-bits, stereo        1.41      0.18      1.41
  44.1kHz, 32-bits, stereo        1.41      0.35      1.41
  44.1kHz, 16-bits, 4-chan        2.82      0.35      2.82
  44.1kHz, 16-bits, 6-chan        4.23      0.53      4.23
  44.1kHz, 16-bits, 8-chan        5.64      0.71      5.64
  ---------------------------------------------------------
From the above measurements, case 2 is the only one that delivers
the expected throughput for the given audio parameters. For
that reason, packed accesses are now enabled.
It is worth mentioning that packed accesses cannot be enabled
for all addressing modes. In cyclic transfers they can be enabled
on the source for MEM_TO_DEV and on the destination for DEV_TO_MEM,
as those sides use post-increment mode, which supports packed accesses.
Peter Ujfalusi:
From the TRM regarding this:
"NOTE: Except in the constant addressing mode, the source or
destination must be specified as packed for burst transactions
to occur."
So without the packed setting the burst on the MEM side was not
enabled, which explains the numbers.
[1] https://github.com/omapconf/omapconf
Signed-off-by: Misael Lopez Cruz <misael.lopez@ti.com>
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Since the mapping between the hardware request lines and channels has been
removed, it no longer makes sense to have too many channels.
Set the number of channels to match the number of logical channels
supported by the sDMA.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Do not directly map the virtual channels to sDMA request numbers. When the
sDMA is behind a crossbar, this direct mapping can cause situations where a
certain channel cannot be requested, since the crossbar request number
will no longer match the sDMA request line.
Direct mapping of virtual channels to HW request lines would also make it
harder to implement MEM_TO_MEM mode in the driver.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Use the dma-requests property from DT to get the number of DMA requests.
In case of legacy boot or failure to find the property, use the default of
127 requests.
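A sketch of the probe-time lookup ("dma-requests" is the generic DT
property; the variable names are illustrative):

  u32 dma_requests = OMAP_SDMA_REQUESTS;          /* default: 127 */

  if (pdev->dev.of_node &&
      of_property_read_u32(pdev->dev.of_node, "dma-requests",
                           &dma_requests))
          dev_info(&pdev->dev,
                   "Missing dma-requests property, using %u.\n",
                   dma_requests);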
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Instead of magic numbers in the code, use define for number of logical DMA
channels and DMA requests.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The sDMA controller is capable of performing memory copy operations. It
needs to be configured for software-triggered mode and without HW
synchronization.
The sDMA can copy data which is aligned to 8, 16 or 32 bits.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
In omap_dma_start_desc the vdesc->node is removed from the virt-dma
framework managed lists (to be precise, from the desc_issued list).
If a terminate_all comes before the transfer finishes, the omap_desc will
not be freed up because it is not in any of the lists and we stopped the
DMA channel, so the transfer is not going to complete.
No special sequence is needed to leak memory with cyclic (audio) transfers:
with every start and stop of a cyclic transfer the driver leaks a
struct omap_desc worth of memory.
Free up the allocated memory directly in omap_dma_terminate_all(), since the
framework is not going to do that for us.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
CC: <stable@vger.kernel.org>
CC: <linux-omap@vger.kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Now that the generic slave caps code can make use of device-assigned
capabilities instead of relying on a callback being implemented, make use of
this code.
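Concretely, the capability information moves into fields on the dma_device
that are filled at probe time; a sketch (the masks shown are plausible for
the sDMA but only illustrative):

  od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) |
                             BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) |
                             BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
  od->ddev.dst_addr_widths = od->ddev.src_addr_widths;
  od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
  od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;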
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Split the device_control callback of the TI OMAP DMA driver to make use
of the newly introduced callbacks that will eventually be used to retrieve
slave capabilities.
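In other words, the single device_control entry point gives way to
per-operation callbacks on the dma_device; a sketch (the omap_dma_* handler
names are assumed to match the driver's existing functions):

  od->ddev.device_config = omap_dma_slave_config;
  od->ddev.device_pause = omap_dma_pause;
  od->ddev.device_resume = omap_dma_resume;
  od->ddev.device_terminate_all = omap_dma_terminate_all;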
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The dmaengine header abbreviates destination as at least two different
strings. Make coherent use of a single one.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Acked-by: Mark Brown <broonie@kernel.org>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Stephen Warren <swarren@wwwdotorg.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Here's the set of driver core patches for 3.19-rc1.
They are dominated by the removal of the .owner field in platform
drivers. They touch a lot of files, but they are "simple" changes, just
removing a line in a structure.
Other than that, a few minor driver core and debugfs changes. There are
some ath9k patches coming in through this tree that have been acked by
the wireless maintainers as they relied on the debugfs changes.
Everything has been in linux-next for a while.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Merge tag 'driver-core-3.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull driver core update from Greg KH:
"Here's the set of driver core patches for 3.19-rc1.
They are dominated by the removal of the .owner field in platform
drivers. They touch a lot of files, but they are "simple" changes,
just removing a line in a structure.
Other than that, a few minor driver core and debugfs changes. There
are some ath9k patches coming in through this tree that have been
acked by the wireless maintainers as they relied on the debugfs
changes.
Everything has been in linux-next for a while"
* tag 'driver-core-3.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (324 commits)
Revert "ath: ath9k: use debugfs_create_devm_seqfile() helper for seq_file entries"
fs: debugfs: add forward declaration for struct device type
firmware class: Deletion of an unnecessary check before the function call "vunmap"
firmware loader: fix hung task warning dump
devcoredump: provide a one-way disable function
device: Add dev_<level>_once variants
ath: ath9k: use debugfs_create_devm_seqfile() helper for seq_file entries
ath: use seq_file api for ath9k debugfs files
debugfs: add helper function to create device related seq_file
drivers/base: cacheinfo: remove noisy error boot message
Revert "core: platform: add warning if driver has no owner"
drivers: base: support cpu cache information interface to userspace via sysfs
drivers: base: add cpu_device_create to support per-cpu devices
topology: replace custom attribute macros with standard DEVICE_ATTR*
cpumask: factor out show_cpumap into separate helper function
driver core: Fix unbalanced device reference in drivers_probe
driver core: fix race with userland in device_add()
sysfs/kernfs: make read requests on pre-alloc files use the buffer.
sysfs/kernfs: allow attributes to request write buffer be pre-allocated.
fs: sysfs: return EGBIG on write if offset is larger than file size
...
chancnt is already filled by dma_async_device_register, which uses the
channel list to know how many channels there are.
Since it is already filled, we can safely remove it from the driver's probe
function.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
When the audio stream is paused or suspended we stop the sDMA, and when it
is unpaused/resumed we start the channel without reconfiguring it.
omap_dma_stop() clears the link configuration when we pause the DMA, but
it does not set it back on start. This results in only one audio buffer
being played back before the DMA stops, since linking is disabled.
We need to restore the CLINK_CTRL register on resume.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Add an mb() call to the resume path to ensure the necessary barrier.
Resume can happen after waking up from suspend, for example.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
The argument is always set to NULL and never used. Remove it.
Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Pull slave-dmaengine updates from Vinod Koul:
- New driver for Qcom bam dma
- New driver for RCAR peri-peri
- New driver for FSL eDMA
- Various odd fixes and updates thru the subsystem
* 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (29 commits)
dmaengine: add Qualcomm BAM dma driver
shdma: add R-Car Audio DMAC peri peri driver
dmaengine: sirf: enable generic dt binding for dma channels
dma: omap-dma: Implement device_slave_caps callback
dmaengine: qcom_bam_dma: Add device tree binding
dma: dw: Add suspend and resume handling for PCI mode DW_DMAC.
dma: dw: allocate memory in two stages in probe
Add new line to test result strings produced in verbose mode
dmaengine: pch_dma: use tasklet_kill in teardown
dmaengine: at_hdmac: use tasklet_kill in teardown
dma: cppi41: start tear down only if channel is busy
usb: musb: musb_cppi41: Dont reprogram DMA if tear down is initiated
dmaengine: s3c24xx-dma: make phy->irq signed for error handling
dma: imx-dma: Add missing module owner field
dma: imx-dma: Replace printk with dev_*
dma: fsl-edma: fix static checker warning of NULL dereference
dma: Remove comment about embedding dma_slave_config into custom structs
dma: mmp_tdma: move to generic device tree binding
dma: mmp_pdma: add IRQF_SHARED when request irq
dma: edma: Fix memory leak in edma_prep_dma_cyclic()
...
We can move the handling of the DMA synchronisation control out of the
prepare functions; this can be pre-calculated when the DMA channel has
been allocated, so we don't need to duplicate this in both prepare
functions.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Move the interrupt handling for OMAP2+ into omap-dma, rather than using
the legacy support in the platform code.
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Export the DMA register information from the SoC specific data, such
that we can access the registers directly in omap-dma.c, mapping the
register region ourselves as well.
Rather than calculating the DMA channel register in its entirety for
each access, we pre-calculate an offset base address for the allocated
DMA channel and then just use the appropriate register offset.
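The effect on the hot path is roughly this (the register, field and stride
names are purely illustrative):

  /* once, when the DMA channel is allocated */
  c->channel_base = od->base + od->chan_stride * c->dma_ch;

  /* on every register access afterwards: just base plus offset */
  writel_relaxed(val, c->channel_base + reg);
  val = readl_relaxed(c->channel_base + reg);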
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Provide a function to read the CSAC/CDAC register, working around the
OMAP 3.2/3.3 erratum (which requires two reads of the register if the
first read returned zero).
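The workaround boils down to re-reading once if the first read returns zero;
a sketch (the accessor and register names are assumptions for illustration):

  static u32 omap_dma_get_src_pos(struct omap_chan *c)
  {
          u32 val;

          val = omap_dma_chan_read(c, CSAC);
          /* OMAP 3.2/3.3 erratum: a first read may spuriously return 0 */
          if (!val)
                  val = omap_dma_chan_read(c, CSAC);

          return val;
  }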
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>