MMC core:

  - Fix SDIO IRQ bug.
  - MMC regulator improvements.
  - Fix slot-gpio card detect bug.
  - Add support for Driver Stage Register.
  - Convert the common MMC OF parser to use GPIO descriptors.
  - Convert MMC_CAP2_NO_MULTI_READ into a callback, ->multi_io_quirk().
  - Some additional minor fixes.
 
 MMC host:
  - mmci: Support Qualcomm specific DML layer for DMA.
  - dw_mmc: Use common MMC regulators.
  - dw_mmc: Add support for Rockchip RK3288.
  - tmio: Enable runtime PM support.
  - tmio: Add support for R-Car Gen2 SoCs.
  - tmio: Several fixes and improvements.
  - omap_hsmmc: Remove Balaji from MAINTAINERS.
  - jz4740: Add DMA and pre/post support.
  - sdhci: Add support for Intel Braswell.
  - sdhci: Several fixes and improvements.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJUNoFRAAoJEP4mhCVzWIwp+oQP/3a9Rs85+lKwnaDtCotCnvps
 LF2R1qiFbeTgQ4XwJvOctuX0VX3/9/XTRhXq+/txA8phlXzqL5BarbXv8WfLILJJ
 DgXDt/lTeW1NzJ9WYjrmV/rsH7qlbyIq6I+7kXVT15M86Qqx40DF0hSx/idDKDc4
 1ly4trLh0ZeqsM10AR9nu6h/ykVBblHOLSnMZXbBhtmIVshvNg+5KRQkSmwtvTKy
 /DswgxmuM1H1Z0T+qNejh4AZSCvxYPlwN06eqYzpYrGuoPH+SafJVws5o1G9z9SX
 t/A9i1QDxFtvDP0u1twEAYv0R4e3H24OPit3R8p2tgMUw683576DPYkF2A13Yzxj
 c3mYiTAPK8UfRc9kWxCRSkaI38URna1+t7hHRuT/Ha6DBlAvHpRL+wIu+/25XVh+
 vNwOmECtT9DzmL2UP+SHLQtyyy3guAFSsFP5RJzuA5wcYeLpNYobcJJCGuziLNYi
 PZ55O+2HRtd7my4A7NiXAib+CXTPs4VY0XY1tBgaWHl2sxFj/mULILaf+3zxpiWg
 Jc8rWkUMpy1nP1OXUrCRBKbgr/loghUOEM6hozggeisDwpjh7Rm5OXZRj6JdO4QT
 DLCl8NQKL8Ex33XoS45LoF2uuTfLt/E52CT0Sic4JdpwvIDTwlhxQR/Yo5gWuCnQ
 L+J+zbclHjORG5EuIUsw
 =VFRY
 -----END PGP SIGNATURE-----

Merge tag 'mmc-v3.18-1' of git://git.linaro.org/people/ulf.hansson/mmc

Pull MMC updates from Ulf Hansson:
 "MMC core:
   - Fix SDIO IRQ bug
   - MMC regulator improvements
   - Fix slot-gpio card detect bug
   - Add support for Driver Stage Register
   - Convert the common MMC OF parser to use GPIO descriptors
   - Convert MMC_CAP2_NO_MULTI_READ into a callback, ->multi_io_quirk()
   - Some additional minor fixes

  MMC host:
   - mmci: Support Qualcomm specific DML layer for DMA
   - dw_mmc: Use common MMC regulators
   - dw_mmc: Add support for Rockchip RK3288
   - tmio: Enable runtime PM support
   - tmio: Add support for R-Car Gen2 SoCs
   - tmio: Several fixes and improvements
   - omap_hsmmc: Remove Balaji from MAINTAINERS
   - jz4740: Add DMA and pre/post support
   - sdhci: Add support for Intel Braswell
   - sdhci: Several fixes and improvements"

* tag 'mmc-v3.18-1' of git://git.linaro.org/people/ulf.hansson/mmc: (119 commits)
  ARM: dts: fix MMC2 regulators for Exynos5420 Arndale Octa board
  mmc: sdhci-acpi: Fix Braswell eMMC timeout clock frequency
  mmc: sdhci-acpi: Pass HID and UID to probe_slot
  mmc: sdhci-acpi: Get UID directly from acpi_device
  mmc, sdhci, bcm-kona, LLVMLinux: Remove use of __initconst
  mmc: sdhci-pci: Fix Braswell eMMC timeout clock frequency
  mmc: sdhci: Let a driver override timeout clock frequency
  mmc: sdhci-pci: Add Bay Trail and Braswell SD card detect
  mmc: sdhci-pci: Set SDHCI_QUIRK2_STOP_WITH_TC for Intel BYT host controllers
  mmc: sdhci-acpi: Add a HID and UID for a SD Card host controller
  mmc: sdhci-acpi: Set SDHCI_QUIRK2_STOP_WITH_TC for Intel host controllers
  mmc: sdhci: Add quirk for always getting TC with stop cmd
  mmc: core: restore detect line inversion semantics
  mmc: Fix incorrect warning when setting 0 Hz via debugfs
  mmc: Fix use of wrong device in mmc_gpiod_free_cd()
  mmc: atmel-mci: fix mismatched section on atmci_cleanup_slot
  mmc: rtsx_pci: Set power related cap2 macros
  mmc: core: Add new power_mode MMC_POWER_UNDEFINED
  mmc: sdhci: execute tuning when device is not busy
  mmc: atmel-mci: Release mmc resources on failure in probe
  ..
commit f43b179bbd
Author: Linus Torvalds
Date:   2014-10-11 06:34:22 -04:00

81 changed files with 2006 additions and 749 deletions


@ -40,6 +40,8 @@ Optional properties:
- mmc-hs200-1_2v: eMMC HS200 mode(1.2V I/O) is supported
- mmc-hs400-1_8v: eMMC HS400 mode(1.8V I/O) is supported
- mmc-hs400-1_2v: eMMC HS400 mode(1.2V I/O) is supported
- dsr: Value the card's (optional) Driver Stage Register (DSR) should be
programmed with. Valid range: [0 .. 0xffff].
*NOTE* on CD and WP polarity. To use common for all SD/MMC host controllers line
polarity properties, we have to fix the meaning of the "normal" and "inverted"


@ -10,12 +10,14 @@ extensions to the Synopsys Designware Mobile Storage Host Controller.
Required Properties:
* compatible: should be
- "rockchip,rk2928-dw-mshc": for Rockchip RK2928 and following
- "rockchip,rk2928-dw-mshc": for Rockchip RK2928 and following,
before RK3288
- "rockchip,rk3288-dw-mshc": for Rockchip RK3288
Example:
rkdwmmc0@12200000 {
compatible = "rockchip,rk2928-dw-mshc";
compatible = "rockchip,rk3288-dw-mshc";
reg = <0x12200000 0x1000>;
interrupts = <0 75 0>;
#address-cells = <1>;


@ -19,6 +19,9 @@ Required properties:
"renesas,sdhi-r8a7779" - SDHI IP on R8A7779 SoC
"renesas,sdhi-r8a7790" - SDHI IP on R8A7790 SoC
"renesas,sdhi-r8a7791" - SDHI IP on R8A7791 SoC
"renesas,sdhi-r8a7792" - SDHI IP on R8A7792 SoC
"renesas,sdhi-r8a7793" - SDHI IP on R8A7793 SoC
"renesas,sdhi-r8a7794" - SDHI IP on R8A7794 SoC
Optional properties:
- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable


@ -6616,10 +6616,9 @@ S: Maintained
F: drivers/mmc/host/omap.c
OMAP HS MMC SUPPORT
M: Balaji T K <balajitk@ti.com>
L: linux-mmc@vger.kernel.org
L: linux-omap@vger.kernel.org
S: Maintained
S: Orphan
F: drivers/mmc/host/omap_hsmmc.c
OMAP RANDOM NUMBER GENERATOR SUPPORT


@ -69,7 +69,8 @@ mmc@12220000 {
samsung,dw-mshc-ddr-timing = <1 2>;
pinctrl-names = "default";
pinctrl-0 = <&sd2_clk &sd2_cmd &sd2_cd &sd2_bus4>;
vmmc-supply = <&ldo10_reg>;
vmmc-supply = <&ldo19_reg>;
vqmmc-supply = <&ldo13_reg>;
bus-width = <4>;
cap-sd-highspeed;
};


@ -331,7 +331,6 @@ SDHI_REGULATOR(2, RCAR_GP_PIN(7, 19), RCAR_GP_PIN(2, 26));
static struct sh_mobile_sdhi_info sdhi0_info __initdata = {
.tmio_caps = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_POWER_OFF_CARD,
.tmio_caps2 = MMC_CAP2_NO_MULTI_READ,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT,
};
@ -344,7 +343,6 @@ static struct resource sdhi0_resources[] __initdata = {
static struct sh_mobile_sdhi_info sdhi1_info __initdata = {
.tmio_caps = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_POWER_OFF_CARD,
.tmio_caps2 = MMC_CAP2_NO_MULTI_READ,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT,
};
@ -357,7 +355,6 @@ static struct resource sdhi1_resources[] __initdata = {
static struct sh_mobile_sdhi_info sdhi2_info __initdata = {
.tmio_caps = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_POWER_OFF_CARD,
.tmio_caps2 = MMC_CAP2_NO_MULTI_READ,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT |
TMIO_MMC_WRPROTECT_DISABLE,
};


@ -630,7 +630,6 @@ static void __init lager_add_rsnd_device(void)
static struct sh_mobile_sdhi_info sdhi0_info __initdata = {
.tmio_caps = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_POWER_OFF_CARD,
.tmio_caps2 = MMC_CAP2_NO_MULTI_READ,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT |
TMIO_MMC_WRPROTECT_DISABLE,
};
@ -644,7 +643,6 @@ static struct resource sdhi0_resources[] __initdata = {
static struct sh_mobile_sdhi_info sdhi2_info __initdata = {
.tmio_caps = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ |
MMC_CAP_POWER_OFF_CARD,
.tmio_caps2 = MMC_CAP2_NO_MULTI_READ,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT |
TMIO_MMC_WRPROTECT_DISABLE,
};


@ -977,7 +977,7 @@ static int mmc_blk_cmd_recovery(struct mmc_card *card, struct request *req,
return ERR_CONTINUE;
/* Now for stop errors. These aren't fatal to the transfer. */
pr_err("%s: error %d sending stop command, original cmd response %#x, card status %#x\n",
pr_info("%s: error %d sending stop command, original cmd response %#x, card status %#x\n",
req->rq_disk->disk_name, brq->stop.error,
brq->cmd.resp[0], status);
@ -1398,10 +1398,15 @@ static void mmc_blk_rw_rq_prep(struct mmc_queue_req *mqrq,
if (disable_multi)
brq->data.blocks = 1;
/* Some controllers can't do multiblock reads due to hw bugs */
if (card->host->caps2 & MMC_CAP2_NO_MULTI_READ &&
rq_data_dir(req) == READ)
brq->data.blocks = 1;
/*
* Some controllers have HW issues while operating
* in multiple I/O mode
*/
if (card->host->ops->multi_io_quirk)
brq->data.blocks = card->host->ops->multi_io_quirk(card,
(rq_data_dir(req) == READ) ?
MMC_DATA_READ : MMC_DATA_WRITE,
brq->data.blocks);
}
if (brq->data.blocks > 1 || do_rel_wr) {
@ -1923,8 +1928,8 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
case MMC_BLK_ECC_ERR:
if (brq->data.blocks > 1) {
/* Redo read one sector at a time */
pr_warning("%s: retrying using single block read\n",
req->rq_disk->disk_name);
pr_warn("%s: retrying using single block read\n",
req->rq_disk->disk_name);
disable_multi = 1;
break;
}
@ -2131,7 +2136,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
md->disk->queue = md->queue.queue;
md->disk->driverfs_dev = parent;
set_disk_ro(md->disk, md->read_only || default_ro);
if (area_type & MMC_BLK_DATA_AREA_RPMB)
if (area_type & (MMC_BLK_DATA_AREA_RPMB | MMC_BLK_DATA_AREA_BOOT))
md->disk->flags |= GENHD_FL_NO_PART_SCAN;
/*

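The block-layer hunk above routes the old MMC_CAP2_NO_MULTI_READ capability through the new ->multi_io_quirk() host callback. As a rough sketch (not part of this series; the foo_mmc names are hypothetical), a host driver whose controller cannot do multi-block reads could wire it up like this:

#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

static int foo_mmc_multi_io_quirk(struct mmc_card *card,
                                  unsigned int direction, int blk_size)
{
        /* cap reads at one block; leave writes untouched */
        if (direction == MMC_DATA_READ)
                return 1;

        return blk_size;
}

static const struct mmc_host_ops foo_mmc_ops = {
        /* ...the driver's usual .request/.set_ios callbacks... */
        .multi_io_quirk = foo_mmc_multi_io_quirk,
};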

@ -229,14 +229,12 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
if (bouncesz > 512) {
mqrq_cur->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
if (!mqrq_cur->bounce_buf) {
pr_warning("%s: unable to "
"allocate bounce cur buffer\n",
pr_warn("%s: unable to allocate bounce cur buffer\n",
mmc_card_name(card));
}
mqrq_prev->bounce_buf = kmalloc(bouncesz, GFP_KERNEL);
if (!mqrq_prev->bounce_buf) {
pr_warning("%s: unable to "
"allocate bounce prev buffer\n",
pr_warn("%s: unable to allocate bounce prev buffer\n",
mmc_card_name(card));
kfree(mqrq_cur->bounce_buf);
mqrq_cur->bounce_buf = NULL;


@ -1063,8 +1063,8 @@ static int sdio_uart_probe(struct sdio_func *func,
return -ENOMEM;
if (func->class == SDIO_CLASS_UART) {
pr_warning("%s: need info on UART class basic setup\n",
sdio_func_id(func));
pr_warn("%s: need info on UART class basic setup\n",
sdio_func_id(func));
kfree(port);
return -ENOSYS;
} else if (func->class == SDIO_CLASS_GPS) {
@ -1082,9 +1082,8 @@ static int sdio_uart_probe(struct sdio_func *func,
break;
}
if (!tpl) {
pr_warning(
"%s: can't find tuple 0x91 subtuple 0 (SUBTPL_SIOREG) for GPS class\n",
sdio_func_id(func));
pr_warn("%s: can't find tuple 0x91 subtuple 0 (SUBTPL_SIOREG) for GPS class\n",
sdio_func_id(func));
kfree(port);
return -EINVAL;
}


@ -433,8 +433,8 @@ static void mmc_wait_for_req_done(struct mmc_host *host,
*/
if (cmd->sanitize_busy && cmd->error == -ETIMEDOUT) {
if (!mmc_interrupt_hpi(host->card)) {
pr_warning("%s: %s: Interrupted sanitize\n",
mmc_hostname(host), __func__);
pr_warn("%s: %s: Interrupted sanitize\n",
mmc_hostname(host), __func__);
cmd->error = 0;
break;
} else {
@ -995,7 +995,7 @@ void mmc_set_chip_select(struct mmc_host *host, int mode)
*/
static void __mmc_set_clock(struct mmc_host *host, unsigned int hz)
{
WARN_ON(hz < host->f_min);
WARN_ON(hz && hz < host->f_min);
if (hz > host->f_max)
hz = host->f_max;
@ -1221,15 +1221,14 @@ int mmc_regulator_get_ocrmask(struct regulator *supply)
int result = 0;
int count;
int i;
int vdd_uV;
int vdd_mV;
count = regulator_count_voltages(supply);
if (count < 0)
return count;
for (i = 0; i < count; i++) {
int vdd_uV;
int vdd_mV;
vdd_uV = regulator_list_voltage(supply, i);
if (vdd_uV <= 0)
continue;
@ -1238,6 +1237,15 @@ int mmc_regulator_get_ocrmask(struct regulator *supply)
result |= mmc_vddrange_to_ocrmask(vdd_mV, vdd_mV);
}
if (!result) {
vdd_uV = regulator_get_voltage(supply);
if (vdd_uV <= 0)
return vdd_uV;
vdd_mV = vdd_uV / 1000;
result = mmc_vddrange_to_ocrmask(vdd_mV, vdd_mV);
}
return result;
}
EXPORT_SYMBOL_GPL(mmc_regulator_get_ocrmask);
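
To make the regulator_get_voltage() fallback above concrete, a worked example with an assumed fixed 3.3 V vmmc supply whose enumerated voltage list produced no usable OCR bits (result == 0 after the loop):

/*
 * regulator_get_voltage(supply)        -> 3300000   (uV, assumed)
 * vdd_mV = 3300000 / 1000              -> 3300
 * mmc_vddrange_to_ocrmask(3300, 3300)  -> MMC_VDD_32_33 | MMC_VDD_33_34
 *
 * so the host still reports a usable OCR mask instead of 0.
 */
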
@ -1263,7 +1271,6 @@ int mmc_regulator_set_ocr(struct mmc_host *mmc,
if (vdd_bit) {
int tmp;
int voltage;
/*
* REVISIT mmc_vddrange_to_ocrmask() may have set some
@ -1280,22 +1287,7 @@ int mmc_regulator_set_ocr(struct mmc_host *mmc,
max_uV = min_uV + 100 * 1000;
}
/*
* If we're using a fixed/static regulator, don't call
* regulator_set_voltage; it would fail.
*/
voltage = regulator_get_voltage(supply);
if (!regulator_can_change_voltage(supply))
min_uV = max_uV = voltage;
if (voltage < 0)
result = voltage;
else if (voltage < min_uV || voltage > max_uV)
result = regulator_set_voltage(supply, min_uV, max_uV);
else
result = 0;
result = regulator_set_voltage(supply, min_uV, max_uV);
if (result == 0 && !mmc->regulator_enabled) {
result = regulator_enable(supply);
if (!result)
@ -1425,8 +1417,8 @@ int mmc_set_signal_voltage(struct mmc_host *host, int signal_voltage, u32 ocr)
if (!host->ops->start_signal_voltage_switch)
return -EPERM;
if (!host->ops->card_busy)
pr_warning("%s: cannot verify signal voltage switch\n",
mmc_hostname(host));
pr_warn("%s: cannot verify signal voltage switch\n",
mmc_hostname(host));
cmd.opcode = SD_SWITCH_VOLTAGE;
cmd.arg = 0;
@ -1761,7 +1753,7 @@ void mmc_init_erase(struct mmc_card *card)
card->erase_shift = ffs(card->ssr.au) - 1;
} else if (card->ext_csd.hc_erase_size) {
card->pref_erase = card->ext_csd.hc_erase_size;
} else {
} else if (card->erase_size) {
sz = (card->csd.capacity << (card->csd.read_blkbits - 9)) >> 11;
if (sz < 128)
card->pref_erase = 512 * 1024 / 512;
@ -1778,7 +1770,8 @@ void mmc_init_erase(struct mmc_card *card)
if (sz)
card->pref_erase += card->erase_size - sz;
}
}
} else
card->pref_erase = 0;
}
static unsigned int mmc_mmc_erase_timeout(struct mmc_card *card,
@ -2489,6 +2482,7 @@ void mmc_start_host(struct mmc_host *host)
{
host->f_init = max(freqs[0], host->f_min);
host->rescan_disable = 0;
host->ios.power_mode = MMC_POWER_UNDEFINED;
if (host->caps2 & MMC_CAP2_NO_PRESCAN_POWERUP)
mmc_power_off(host);
else


@ -310,9 +310,8 @@ int mmc_of_parse(struct mmc_host *host)
{
struct device_node *np;
u32 bus_width;
bool explicit_inv_wp, gpio_inv_wp = false;
enum of_gpio_flags flags;
int len, ret, gpio;
int len, ret;
bool cap_invert, gpio_invert;
if (!host->parent || !host->parent->of_node)
return 0;
@ -360,59 +359,62 @@ int mmc_of_parse(struct mmc_host *host)
if (of_find_property(np, "non-removable", &len)) {
host->caps |= MMC_CAP_NONREMOVABLE;
} else {
bool explicit_inv_cd, gpio_inv_cd = false;
explicit_inv_cd = of_property_read_bool(np, "cd-inverted");
if (of_property_read_bool(np, "cd-inverted"))
cap_invert = true;
else
cap_invert = false;
if (of_find_property(np, "broken-cd", &len))
host->caps |= MMC_CAP_NEEDS_POLL;
gpio = of_get_named_gpio_flags(np, "cd-gpios", 0, &flags);
if (gpio == -EPROBE_DEFER)
return gpio;
if (gpio_is_valid(gpio)) {
if (!(flags & OF_GPIO_ACTIVE_LOW))
gpio_inv_cd = true;
ret = mmc_gpio_request_cd(host, gpio, 0);
if (ret < 0) {
dev_err(host->parent,
"Failed to request CD GPIO #%d: %d!\n",
gpio, ret);
ret = mmc_gpiod_request_cd(host, "cd", 0, true,
0, &gpio_invert);
if (ret) {
if (ret == -EPROBE_DEFER)
return ret;
} else {
dev_info(host->parent, "Got CD GPIO #%d.\n",
gpio);
if (ret != -ENOENT) {
dev_err(host->parent,
"Failed to request CD GPIO: %d\n",
ret);
}
}
} else
dev_info(host->parent, "Got CD GPIO\n");
if (explicit_inv_cd ^ gpio_inv_cd)
/*
* There are two ways to flag that the CD line is inverted:
* through the cd-inverted flag and by the GPIO line itself
* being inverted from the GPIO subsystem. This is a leftover
* from the times when the GPIO subsystem did not make it
* possible to flag a line as inverted.
*
* If the capability on the host AND the GPIO line are
* both inverted, the end result is that the CD line is
* not inverted.
*/
if (cap_invert ^ gpio_invert)
host->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH;
}
/* Parse Write Protection */
explicit_inv_wp = of_property_read_bool(np, "wp-inverted");
if (of_property_read_bool(np, "wp-inverted"))
cap_invert = true;
else
cap_invert = false;
gpio = of_get_named_gpio_flags(np, "wp-gpios", 0, &flags);
if (gpio == -EPROBE_DEFER) {
ret = -EPROBE_DEFER;
goto out;
}
if (gpio_is_valid(gpio)) {
if (!(flags & OF_GPIO_ACTIVE_LOW))
gpio_inv_wp = true;
ret = mmc_gpio_request_ro(host, gpio);
if (ret < 0) {
dev_err(host->parent,
"Failed to request WP GPIO: %d!\n", ret);
ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &gpio_invert);
if (ret) {
if (ret == -EPROBE_DEFER)
goto out;
} else {
dev_info(host->parent, "Got WP GPIO #%d.\n",
gpio);
if (ret != -ENOENT) {
dev_err(host->parent,
"Failed to request WP GPIO: %d\n",
ret);
}
}
if (explicit_inv_wp ^ gpio_inv_wp)
} else
dev_info(host->parent, "Got WP GPIO\n");
/* See the comment on CD inversion above */
if (cap_invert ^ gpio_invert)
host->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH;
if (of_find_property(np, "cap-sd-highspeed", &len))
@ -452,6 +454,14 @@ int mmc_of_parse(struct mmc_host *host)
if (of_find_property(np, "mmc-hs400-1_2v", &len))
host->caps2 |= MMC_CAP2_HS400_1_2V | MMC_CAP2_HS200_1_2V_SDR;
host->dsr_req = !of_property_read_u32(np, "dsr", &host->dsr);
if (host->dsr_req && (host->dsr & ~0xffff)) {
dev_err(host->parent,
"device tree specified broken value for DSR: 0x%x, ignoring\n",
host->dsr);
host->dsr_req = 0;
}
return 0;
out:

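The cap_invert ^ gpio_invert tests in the mmc_of_parse() hunks above combine the legacy "cd-inverted"/"wp-inverted" properties with the GPIO subsystem's own active-low flag. Spelled out for card detect (write protect is analogous):

/*
 * cd-inverted | GPIO flag        | cap_invert ^ gpio_invert | resulting CD polarity
 * ------------+------------------+--------------------------+-----------------------
 * absent      | GPIO_ACTIVE_LOW  | 0 ^ 0 = 0                | active-low (default)
 * present     | GPIO_ACTIVE_LOW  | 1 ^ 0 = 1                | MMC_CAP2_CD_ACTIVE_HIGH
 * absent      | GPIO_ACTIVE_HIGH | 0 ^ 1 = 1                | MMC_CAP2_CD_ACTIVE_HIGH
 * present     | GPIO_ACTIVE_HIGH | 1 ^ 1 = 0                | active-low (the two
 *             |                  |                          | inversions cancel)
 */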

@ -162,6 +162,7 @@ static int mmc_decode_csd(struct mmc_card *card)
csd->read_partial = UNSTUFF_BITS(resp, 79, 1);
csd->write_misalign = UNSTUFF_BITS(resp, 78, 1);
csd->read_misalign = UNSTUFF_BITS(resp, 77, 1);
csd->dsr_imp = UNSTUFF_BITS(resp, 76, 1);
csd->r2w_factor = UNSTUFF_BITS(resp, 26, 3);
csd->write_blkbits = UNSTUFF_BITS(resp, 22, 4);
csd->write_partial = UNSTUFF_BITS(resp, 21, 1);
@ -225,9 +226,7 @@ static int mmc_get_ext_csd(struct mmc_card *card, u8 **new_ext_csd)
"Card will be ignored.\n",
mmc_hostname(card->host));
} else {
pr_warning("%s: unable to read "
"EXT_CSD, performance might "
"suffer.\n",
pr_warn("%s: unable to read EXT_CSD, performance might suffer\n",
mmc_hostname(card->host));
err = 0;
}
@ -298,6 +297,97 @@ static void mmc_select_card_type(struct mmc_card *card)
card->mmc_avail_type = avail_type;
}
static void mmc_manage_enhanced_area(struct mmc_card *card, u8 *ext_csd)
{
u8 hc_erase_grp_sz, hc_wp_grp_sz;
/*
* Disable these attributes by default
*/
card->ext_csd.enhanced_area_offset = -EINVAL;
card->ext_csd.enhanced_area_size = -EINVAL;
/*
* Enhanced area feature support -- check whether the eMMC
* card has the Enhanced area enabled. If so, export enhanced
* area offset and size to user by adding sysfs interface.
*/
if ((ext_csd[EXT_CSD_PARTITION_SUPPORT] & 0x2) &&
(ext_csd[EXT_CSD_PARTITION_ATTRIBUTE] & 0x1)) {
if (card->ext_csd.partition_setting_completed) {
hc_erase_grp_sz =
ext_csd[EXT_CSD_HC_ERASE_GRP_SIZE];
hc_wp_grp_sz =
ext_csd[EXT_CSD_HC_WP_GRP_SIZE];
/*
* calculate the enhanced data area offset, in bytes
*/
card->ext_csd.enhanced_area_offset =
(ext_csd[139] << 24) + (ext_csd[138] << 16) +
(ext_csd[137] << 8) + ext_csd[136];
if (mmc_card_blockaddr(card))
card->ext_csd.enhanced_area_offset <<= 9;
/*
* calculate the enhanced data area size, in kilobytes
*/
card->ext_csd.enhanced_area_size =
(ext_csd[142] << 16) + (ext_csd[141] << 8) +
ext_csd[140];
card->ext_csd.enhanced_area_size *=
(size_t)(hc_erase_grp_sz * hc_wp_grp_sz);
card->ext_csd.enhanced_area_size <<= 9;
} else {
pr_warn("%s: defines enhanced area without partition setting complete\n",
mmc_hostname(card->host));
}
}
}
static void mmc_manage_gp_partitions(struct mmc_card *card, u8 *ext_csd)
{
int idx;
u8 hc_erase_grp_sz, hc_wp_grp_sz;
unsigned int part_size;
/*
* General purpose partition feature support --
* If ext_csd has the size of general purpose partitions,
* set size, part_cfg, partition name in mmc_part.
*/
if (ext_csd[EXT_CSD_PARTITION_SUPPORT] &
EXT_CSD_PART_SUPPORT_PART_EN) {
hc_erase_grp_sz =
ext_csd[EXT_CSD_HC_ERASE_GRP_SIZE];
hc_wp_grp_sz =
ext_csd[EXT_CSD_HC_WP_GRP_SIZE];
for (idx = 0; idx < MMC_NUM_GP_PARTITION; idx++) {
if (!ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3] &&
!ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 1] &&
!ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 2])
continue;
if (card->ext_csd.partition_setting_completed == 0) {
pr_warn("%s: has partition size defined without partition complete\n",
mmc_hostname(card->host));
break;
}
part_size =
(ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 2]
<< 16) +
(ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 1]
<< 8) +
ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3];
part_size *= (size_t)(hc_erase_grp_sz *
hc_wp_grp_sz);
mmc_part_add(card, part_size << 19,
EXT_CSD_PART_CONFIG_ACC_GP0 + idx,
"gp%d", idx, false,
MMC_BLK_DATA_AREA_GP);
}
}
}
/*
* Decode extended CSD.
*/
@ -305,7 +395,6 @@ static int mmc_read_ext_csd(struct mmc_card *card, u8 *ext_csd)
{
int err = 0, idx;
unsigned int part_size;
u8 hc_erase_grp_sz = 0, hc_wp_grp_sz = 0;
BUG_ON(!card);
@ -402,80 +491,16 @@ static int mmc_read_ext_csd(struct mmc_card *card, u8 *ext_csd)
ext_csd[EXT_CSD_TRIM_MULT];
card->ext_csd.raw_partition_support = ext_csd[EXT_CSD_PARTITION_SUPPORT];
if (card->ext_csd.rev >= 4) {
/*
* Enhanced area feature support -- check whether the eMMC
* card has the Enhanced area enabled. If so, export enhanced
* area offset and size to user by adding sysfs interface.
*/
if ((ext_csd[EXT_CSD_PARTITION_SUPPORT] & 0x2) &&
(ext_csd[EXT_CSD_PARTITION_ATTRIBUTE] & 0x1)) {
hc_erase_grp_sz =
ext_csd[EXT_CSD_HC_ERASE_GRP_SIZE];
hc_wp_grp_sz =
ext_csd[EXT_CSD_HC_WP_GRP_SIZE];
if (ext_csd[EXT_CSD_PARTITION_SETTING_COMPLETED] &
EXT_CSD_PART_SETTING_COMPLETED)
card->ext_csd.partition_setting_completed = 1;
else
card->ext_csd.partition_setting_completed = 0;
card->ext_csd.enhanced_area_en = 1;
/*
* calculate the enhanced data area offset, in bytes
*/
card->ext_csd.enhanced_area_offset =
(ext_csd[139] << 24) + (ext_csd[138] << 16) +
(ext_csd[137] << 8) + ext_csd[136];
if (mmc_card_blockaddr(card))
card->ext_csd.enhanced_area_offset <<= 9;
/*
* calculate the enhanced data area size, in kilobytes
*/
card->ext_csd.enhanced_area_size =
(ext_csd[142] << 16) + (ext_csd[141] << 8) +
ext_csd[140];
card->ext_csd.enhanced_area_size *=
(size_t)(hc_erase_grp_sz * hc_wp_grp_sz);
card->ext_csd.enhanced_area_size <<= 9;
} else {
/*
* If the enhanced area is not enabled, disable these
* device attributes.
*/
card->ext_csd.enhanced_area_offset = -EINVAL;
card->ext_csd.enhanced_area_size = -EINVAL;
}
mmc_manage_enhanced_area(card, ext_csd);
/*
* General purpose partition feature support --
* If ext_csd has the size of general purpose partitions,
* set size, part_cfg, partition name in mmc_part.
*/
if (ext_csd[EXT_CSD_PARTITION_SUPPORT] &
EXT_CSD_PART_SUPPORT_PART_EN) {
if (card->ext_csd.enhanced_area_en != 1) {
hc_erase_grp_sz =
ext_csd[EXT_CSD_HC_ERASE_GRP_SIZE];
hc_wp_grp_sz =
ext_csd[EXT_CSD_HC_WP_GRP_SIZE];
mmc_manage_gp_partitions(card, ext_csd);
card->ext_csd.enhanced_area_en = 1;
}
for (idx = 0; idx < MMC_NUM_GP_PARTITION; idx++) {
if (!ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3] &&
!ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 1] &&
!ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 2])
continue;
part_size =
(ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 2]
<< 16) +
(ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3 + 1]
<< 8) +
ext_csd[EXT_CSD_GP_SIZE_MULT + idx * 3];
part_size *= (size_t)(hc_erase_grp_sz *
hc_wp_grp_sz);
mmc_part_add(card, part_size << 19,
EXT_CSD_PART_CONFIG_ACC_GP0 + idx,
"gp%d", idx, false,
MMC_BLK_DATA_AREA_GP);
}
}
card->ext_csd.sec_trim_mult =
ext_csd[EXT_CSD_SEC_TRIM_MULT];
card->ext_csd.sec_erase_mult =
@ -789,8 +814,8 @@ static int __mmc_select_powerclass(struct mmc_card *card,
ext_csd->raw_pwr_cl_200_360;
break;
default:
pr_warning("%s: Voltage range not supported "
"for power class.\n", mmc_hostname(host));
pr_warn("%s: Voltage range not supported for power class\n",
mmc_hostname(host));
return -EINVAL;
}
@ -987,19 +1012,35 @@ static int mmc_select_hs_ddr(struct mmc_card *card)
* 1.8V vccq at 3.3V core voltage (vcc) is not required
* in the JEDEC spec for DDR.
*
* Do not force change in vccq since we are obviously
* working and no change to vccq is needed.
* Even (e)MMC card can support 3.3v to 1.2v vccq, but not all
* host controller can support this, like some of the SDHCI
* controller which connect to an eMMC device. Some of these
* host controller still needs to use 1.8v vccq for supporting
* DDR mode.
*
* So the sequence will be:
* if (host and device can both support 1.2v IO)
* use 1.2v IO;
* else if (host and device can both support 1.8v IO)
* use 1.8v IO;
* so if host and device can only support 3.3v IO, this is the
* last choice.
*
* WARNING: eMMC rules are NOT the same as SD DDR
*/
if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_2V) {
err = __mmc_set_signal_voltage(host,
MMC_SIGNAL_VOLTAGE_120);
if (err)
return err;
}
err = -EINVAL;
if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_2V)
err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
mmc_set_timing(host, MMC_TIMING_MMC_DDR52);
if (err && (card->mmc_avail_type & EXT_CSD_CARD_TYPE_DDR_1_8V))
err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180);
/* make sure vccq is 3.3v after switching disaster */
if (err)
err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330);
if (!err)
mmc_set_timing(host, MMC_TIMING_MMC_DDR52);
return err;
}
@ -1134,6 +1175,38 @@ static int mmc_select_timing(struct mmc_card *card)
return err;
}
const u8 tuning_blk_pattern_4bit[MMC_TUNING_BLK_PATTERN_4BIT_SIZE] = {
0xff, 0x0f, 0xff, 0x00, 0xff, 0xcc, 0xc3, 0xcc,
0xc3, 0x3c, 0xcc, 0xff, 0xfe, 0xff, 0xfe, 0xef,
0xff, 0xdf, 0xff, 0xdd, 0xff, 0xfb, 0xff, 0xfb,
0xbf, 0xff, 0x7f, 0xff, 0x77, 0xf7, 0xbd, 0xef,
0xff, 0xf0, 0xff, 0xf0, 0x0f, 0xfc, 0xcc, 0x3c,
0xcc, 0x33, 0xcc, 0xcf, 0xff, 0xef, 0xff, 0xee,
0xff, 0xfd, 0xff, 0xfd, 0xdf, 0xff, 0xbf, 0xff,
0xbb, 0xff, 0xf7, 0xff, 0xf7, 0x7f, 0x7b, 0xde,
};
EXPORT_SYMBOL(tuning_blk_pattern_4bit);
const u8 tuning_blk_pattern_8bit[MMC_TUNING_BLK_PATTERN_8BIT_SIZE] = {
0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0x00, 0x00,
0xff, 0xff, 0xcc, 0xcc, 0xcc, 0x33, 0xcc, 0xcc,
0xcc, 0x33, 0x33, 0xcc, 0xcc, 0xcc, 0xff, 0xff,
0xff, 0xee, 0xff, 0xff, 0xff, 0xee, 0xee, 0xff,
0xff, 0xff, 0xdd, 0xff, 0xff, 0xff, 0xdd, 0xdd,
0xff, 0xff, 0xff, 0xbb, 0xff, 0xff, 0xff, 0xbb,
0xbb, 0xff, 0xff, 0xff, 0x77, 0xff, 0xff, 0xff,
0x77, 0x77, 0xff, 0x77, 0xbb, 0xdd, 0xee, 0xff,
0xff, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0x00,
0x00, 0xff, 0xff, 0xcc, 0xcc, 0xcc, 0x33, 0xcc,
0xcc, 0xcc, 0x33, 0x33, 0xcc, 0xcc, 0xcc, 0xff,
0xff, 0xff, 0xee, 0xff, 0xff, 0xff, 0xee, 0xee,
0xff, 0xff, 0xff, 0xdd, 0xff, 0xff, 0xff, 0xdd,
0xdd, 0xff, 0xff, 0xff, 0xbb, 0xff, 0xff, 0xff,
0xbb, 0xbb, 0xff, 0xff, 0xff, 0x77, 0xff, 0xff,
0xff, 0x77, 0x77, 0xff, 0x77, 0xbb, 0xdd, 0xee,
};
EXPORT_SYMBOL(tuning_blk_pattern_8bit);
/*
* Execute tuning sequence to seek the proper bus operating
* conditions for HS200 and HS400, which sends CMD21 to the device.
@ -1271,6 +1344,13 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
goto free_card;
}
/*
* handling only for cards supporting DSR and hosts requesting
* DSR configuration
*/
if (card->csd.dsr_imp && host->dsr_req)
mmc_set_dsr(host);
/*
* Select card, as all following commands rely on that.
*/
@ -1308,7 +1388,7 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
* If enhanced_area_en is TRUE, host needs to enable ERASE_GRP_DEF
* bit. This bit will be lost every time after a reset or power off.
*/
if (card->ext_csd.enhanced_area_en ||
if (card->ext_csd.partition_setting_completed ||
(card->ext_csd.rev >= 3 && (host->caps2 & MMC_CAP2_HC_ERASE_SZ))) {
err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_ERASE_GROUP_DEF, 1,
@ -1408,8 +1488,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
if (err && err != -EBADMSG)
goto free_card;
if (err) {
pr_warning("%s: Enabling HPI failed\n",
mmc_hostname(card->host));
pr_warn("%s: Enabling HPI failed\n",
mmc_hostname(card->host));
err = 0;
} else
card->ext_csd.hpi_en = 1;
@ -1430,9 +1510,8 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
* Only if no error, cache is turned on successfully.
*/
if (err) {
pr_warning("%s: Cache is supported, "
"but failed to turn on (%d)\n",
mmc_hostname(card->host), err);
pr_warn("%s: Cache is supported, but failed to turn on (%d)\n",
mmc_hostname(card->host), err);
card->ext_csd.cache_ctrl = 0;
err = 0;
} else {


@ -93,6 +93,26 @@ int mmc_deselect_cards(struct mmc_host *host)
return _mmc_select_card(host, NULL);
}
/*
* Write the value specified in the device tree or board code into the optional
* 16 bit Driver Stage Register. This can be used to tune raise/fall times and
* drive strength of the DAT and CMD outputs. The actual meaning of a given
* value is hardware dependant.
* The presence of the DSR register can be determined from the CSD register,
* bit 76.
*/
int mmc_set_dsr(struct mmc_host *host)
{
struct mmc_command cmd = {0};
cmd.opcode = MMC_SET_DSR;
cmd.arg = (host->dsr << 16) | 0xffff;
cmd.flags = MMC_RSP_NONE | MMC_CMD_AC;
return mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
}
int mmc_go_idle(struct mmc_host *host)
{
int err;
@ -629,8 +649,8 @@ int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status)
int err;
if (!card->ext_csd.hpi) {
pr_warning("%s: Card didn't support HPI command\n",
mmc_hostname(card->host));
pr_warn("%s: Card didn't support HPI command\n",
mmc_hostname(card->host));
return -EINVAL;
}

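For a concrete picture of what mmc_set_dsr() above sends, assume the device tree carried dsr = <0x0404> (an arbitrary illustrative value):

/*
 * host->dsr = 0x0404
 * cmd.arg   = (0x0404 << 16) | 0xffff = 0x0404ffff
 *
 * Bits [31:16] carry the DSR value and the low 16 bits are stuff bits;
 * CMD4 has no response (MMC_RSP_NONE). This is also why mmc_of_parse()
 * rejects "dsr" values wider than 16 bits.
 */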

@ -14,6 +14,7 @@
int mmc_select_card(struct mmc_card *card);
int mmc_deselect_cards(struct mmc_host *host);
int mmc_set_dsr(struct mmc_host *host);
int mmc_go_idle(struct mmc_host *host);
int mmc_send_op_cond(struct mmc_host *host, u32 ocr, u32 *rocr);
int mmc_all_send_cid(struct mmc_host *host, u32 *cid);


@ -127,6 +127,7 @@ static int mmc_decode_csd(struct mmc_card *card)
csd->read_partial = UNSTUFF_BITS(resp, 79, 1);
csd->write_misalign = UNSTUFF_BITS(resp, 78, 1);
csd->read_misalign = UNSTUFF_BITS(resp, 77, 1);
csd->dsr_imp = UNSTUFF_BITS(resp, 76, 1);
csd->r2w_factor = UNSTUFF_BITS(resp, 26, 3);
csd->write_blkbits = UNSTUFF_BITS(resp, 22, 4);
csd->write_partial = UNSTUFF_BITS(resp, 21, 1);
@ -228,8 +229,8 @@ static int mmc_read_ssr(struct mmc_card *card)
u32 *ssr;
if (!(card->csd.cmdclass & CCC_APP_SPEC)) {
pr_warning("%s: card lacks mandatory SD Status "
"function.\n", mmc_hostname(card->host));
pr_warn("%s: card lacks mandatory SD Status function\n",
mmc_hostname(card->host));
return 0;
}
@ -239,8 +240,8 @@ static int mmc_read_ssr(struct mmc_card *card)
err = mmc_app_sd_status(card, ssr);
if (err) {
pr_warning("%s: problem reading SD Status "
"register.\n", mmc_hostname(card->host));
pr_warn("%s: problem reading SD Status register\n",
mmc_hostname(card->host));
err = 0;
goto out;
}
@ -264,8 +265,8 @@ static int mmc_read_ssr(struct mmc_card *card)
card->ssr.erase_offset = eo * 1000;
}
} else {
pr_warning("%s: SD Status: Invalid Allocation Unit size.\n",
mmc_hostname(card->host));
pr_warn("%s: SD Status: Invalid Allocation Unit size\n",
mmc_hostname(card->host));
}
}
out:
@ -285,8 +286,7 @@ static int mmc_read_switch(struct mmc_card *card)
return 0;
if (!(card->csd.cmdclass & CCC_SWITCH)) {
pr_warning("%s: card lacks mandatory switch "
"function, performance might suffer.\n",
pr_warn("%s: card lacks mandatory switch function, performance might suffer\n",
mmc_hostname(card->host));
return 0;
}
@ -315,7 +315,7 @@ static int mmc_read_switch(struct mmc_card *card)
if (err != -EINVAL && err != -ENOSYS && err != -EFAULT)
goto out;
pr_warning("%s: problem reading Bus Speed modes.\n",
pr_warn("%s: problem reading Bus Speed modes\n",
mmc_hostname(card->host));
err = 0;
@ -371,8 +371,7 @@ int mmc_sd_switch_hs(struct mmc_card *card)
goto out;
if ((status[16] & 0xF) != 1) {
pr_warning("%s: Problem switching card "
"into high-speed mode!\n",
pr_warn("%s: Problem switching card into high-speed mode!\n",
mmc_hostname(card->host));
err = 0;
} else {
@ -439,7 +438,7 @@ static int sd_select_driver_type(struct mmc_card *card, u8 *status)
return err;
if ((status[15] & 0xF) != drive_strength) {
pr_warning("%s: Problem setting drive strength!\n",
pr_warn("%s: Problem setting drive strength!\n",
mmc_hostname(card->host));
return 0;
}
@ -517,7 +516,7 @@ static int sd_set_bus_speed_mode(struct mmc_card *card, u8 *status)
return err;
if ((status[16] & 0xF) != card->sd_bus_speed)
pr_warning("%s: Problem setting bus speed mode!\n",
pr_warn("%s: Problem setting bus speed mode!\n",
mmc_hostname(card->host));
else {
mmc_set_timing(card->host, timing);
@ -597,7 +596,7 @@ static int sd_set_current_limit(struct mmc_card *card, u8 *status)
return err;
if (((status[15] >> 4) & 0x0F) != current_limit)
pr_warning("%s: Problem setting current limit!\n",
pr_warn("%s: Problem setting current limit!\n",
mmc_hostname(card->host));
}
@ -726,8 +725,7 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
try_again:
if (!retries) {
ocr &= ~SD_OCR_S18R;
pr_warning("%s: Skipping voltage switch\n",
mmc_hostname(host));
pr_warn("%s: Skipping voltage switch\n", mmc_hostname(host));
}
/*
@ -871,9 +869,7 @@ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
}
if (ro < 0) {
pr_warning("%s: host does not "
"support reading read-only "
"switch. assuming write-enable.\n",
pr_warn("%s: host does not support reading read-only switch, assuming write-enable\n",
mmc_hostname(host));
} else if (ro > 0) {
mmc_card_set_readonly(card);
@ -953,6 +949,13 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
mmc_decode_cid(card);
}
/*
* handling only for cards supporting DSR and hosts requesting
* DSR configuration
*/
if (card->csd.dsr_imp && host->dsr_req)
mmc_set_dsr(host);
/*
* Select card, as all following commands rely on that.
*/


@ -216,8 +216,8 @@ static int sdio_enable_wide(struct mmc_card *card)
return ret;
if ((ctrl & SDIO_BUS_WIDTH_MASK) == SDIO_BUS_WIDTH_RESERVED)
pr_warning("%s: SDIO_CCCR_IF is invalid: 0x%02x\n",
mmc_hostname(card->host), ctrl);
pr_warn("%s: SDIO_CCCR_IF is invalid: 0x%02x\n",
mmc_hostname(card->host), ctrl);
/* set as 4-bit bus width */
ctrl &= ~SDIO_BUS_WIDTH_MASK;
@ -605,8 +605,7 @@ static int mmc_sdio_init_card(struct mmc_host *host, u32 ocr,
try_again:
if (!retries) {
pr_warning("%s: Skipping voltage switch\n",
mmc_hostname(host));
pr_warn("%s: Skipping voltage switch\n", mmc_hostname(host));
ocr &= ~R4_18V_PRESENT;
}
@ -992,8 +991,16 @@ static int mmc_sdio_resume(struct mmc_host *host)
}
}
if (!err && host->sdio_irqs)
wake_up_process(host->sdio_irq_thread);
if (!err && host->sdio_irqs) {
if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD)) {
wake_up_process(host->sdio_irq_thread);
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
}
mmc_release_host(host);
host->pm_flags &= ~MMC_PM_KEEP_POWER;


@ -178,8 +178,8 @@ static int sdio_bus_remove(struct device *dev)
drv->remove(func);
if (func->irq_handler) {
pr_warning("WARNING: driver %s did not remove "
"its interrupt handler!\n", drv->name);
pr_warn("WARNING: driver %s did not remove its interrupt handler!\n",
drv->name);
sdio_claim_host(func);
sdio_release_irq(func);
sdio_release_host(func);


@ -69,16 +69,15 @@ static int process_sdio_pending_irqs(struct mmc_host *host)
if (pending & (1 << i)) {
func = card->sdio_func[i - 1];
if (!func) {
pr_warning("%s: pending IRQ for "
"non-existent function\n",
pr_warn("%s: pending IRQ for non-existent function\n",
mmc_card_id(card));
ret = -EINVAL;
} else if (func->irq_handler) {
func->irq_handler(func);
count++;
} else {
pr_warning("%s: pending IRQ with no handler\n",
sdio_func_id(func));
pr_warn("%s: pending IRQ with no handler\n",
sdio_func_id(func));
ret = -EINVAL;
}
}
@ -208,7 +207,7 @@ static int sdio_card_irq_get(struct mmc_card *card)
host->sdio_irqs--;
return err;
}
} else {
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
@ -229,7 +228,7 @@ static int sdio_card_irq_put(struct mmc_card *card)
if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD)) {
atomic_set(&host->sdio_irq_thread_abort, 1);
kthread_stop(host->sdio_irq_thread);
} else {
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 0);
mmc_host_clk_release(host);

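For context on the MMC_CAP_SDIO_IRQ guards added in the two hunks above, the core supports two SDIO IRQ delivery models (both visible in this file):

/*
 * - Default (threaded): a per-host sdio_irq_thread polls the card and
 *   dispatches func->irq_handler(); it is woken with wake_up_process()
 *   and torn down with kthread_stop(), as in the surrounding code.
 *
 * - MMC_CAP2_SDIO_IRQ_NOTHREAD: no kthread is started and the host
 *   signals card interrupts itself; calling ->enable_sdio_irq() only
 *   makes sense when the host also advertises MMC_CAP_SDIO_IRQ, which
 *   is exactly the check these hunks add.
 */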

@ -221,8 +221,6 @@ int mmc_gpio_request_cd(struct mmc_host *host, unsigned int gpio,
ctx->override_cd_active_level = true;
ctx->cd_gpio = gpio_to_desc(gpio);
mmc_gpiod_request_cd_irq(host);
return 0;
}
EXPORT_SYMBOL(mmc_gpio_request_cd);
@ -283,6 +281,8 @@ EXPORT_SYMBOL(mmc_gpio_free_cd);
* @idx: index of the GPIO to obtain in the consumer
* @override_active_level: ignore %GPIO_ACTIVE_LOW flag
* @debounce: debounce time in microseconds
* @gpio_invert: will return whether the GPIO line is inverted or not, set
* to NULL to ignore
*
* Use this function in place of mmc_gpio_request_cd() to use the GPIO
* descriptor API. Note that it is paired with mmc_gpiod_free_cd() not
@ -293,7 +293,7 @@ EXPORT_SYMBOL(mmc_gpio_free_cd);
*/
int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
unsigned int idx, bool override_active_level,
unsigned int debounce)
unsigned int debounce, bool *gpio_invert)
{
struct mmc_gpio *ctx;
struct gpio_desc *desc;
@ -308,20 +308,19 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
if (!con_id)
con_id = ctx->cd_label;
desc = devm_gpiod_get_index(host->parent, con_id, idx);
desc = devm_gpiod_get_index(host->parent, con_id, idx, GPIOD_IN);
if (IS_ERR(desc))
return PTR_ERR(desc);
ret = gpiod_direction_input(desc);
if (ret < 0)
return ret;
if (debounce) {
ret = gpiod_set_debounce(desc, debounce);
if (ret < 0)
return ret;
}
if (gpio_invert)
*gpio_invert = !gpiod_is_active_low(desc);
ctx->override_cd_active_level = override_active_level;
ctx->cd_gpio = desc;
@ -329,6 +328,59 @@ int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
}
EXPORT_SYMBOL(mmc_gpiod_request_cd);
/**
* mmc_gpiod_request_ro - request a gpio descriptor for write protection
* @host: mmc host
* @con_id: function within the GPIO consumer
* @idx: index of the GPIO to obtain in the consumer
* @override_active_level: ignore %GPIO_ACTIVE_LOW flag
* @debounce: debounce time in microseconds
* @gpio_invert: will return whether the GPIO line is inverted or not,
* set to NULL to ignore
*
* Use this function in place of mmc_gpio_request_ro() to use the GPIO
* descriptor API. Note that it is paired with mmc_gpiod_free_ro() not
* mmc_gpio_free_ro().
*
* Returns zero on success, else an error.
*/
int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id,
unsigned int idx, bool override_active_level,
unsigned int debounce, bool *gpio_invert)
{
struct mmc_gpio *ctx;
struct gpio_desc *desc;
int ret;
ret = mmc_gpio_alloc(host);
if (ret < 0)
return ret;
ctx = host->slot.handler_priv;
if (!con_id)
con_id = ctx->ro_label;
desc = devm_gpiod_get_index(host->parent, con_id, idx, GPIOD_IN);
if (IS_ERR(desc))
return PTR_ERR(desc);
if (debounce) {
ret = gpiod_set_debounce(desc, debounce);
if (ret < 0)
return ret;
}
if (gpio_invert)
*gpio_invert = !gpiod_is_active_low(desc);
ctx->override_ro_active_level = override_active_level;
ctx->ro_gpio = desc;
return 0;
}
EXPORT_SYMBOL(mmc_gpiod_request_ro);
/**
* mmc_gpiod_free_cd - free the card-detection gpio descriptor
* @host: mmc host
@ -348,7 +400,7 @@ void mmc_gpiod_free_cd(struct mmc_host *host)
host->slot.cd_irq = -EINVAL;
}
devm_gpiod_put(&host->class_dev, ctx->cd_gpio);
devm_gpiod_put(host->parent, ctx->cd_gpio);
ctx->cd_gpio = NULL;
}


@ -14,6 +14,17 @@ config MMC_ARMMMCI
If unsure, say N.
config MMC_QCOM_DML
tristate "Qualcomm Data Mover for SD Card Controller"
depends on MMC_ARMMMCI && QCOM_BAM_DMA
default y
help
This selects the Qualcomm Data Mover lite/local on SD Card controller.
This option will enable the dma to work correctly, if you are using
Qcom SOCs and MMC, you would probably need this option to get DMA working.
if unsure, say N.
config MMC_PXA
tristate "Intel PXA25x/26x/27x Multimedia Card Interface support"
depends on ARCH_PXA
@ -568,7 +579,8 @@ config SDH_BFIN_MISSING_CMD_PULLUP_WORKAROUND
config MMC_DW
tristate "Synopsys DesignWare Memory Card Interface"
depends on ARC || ARM
depends on HAS_DMA
depends on ARC || ARM || MIPS || COMPILE_TEST
help
This selects support for the Synopsys DesignWare Mobile Storage IP
block, this provides host support for SD and MMC interfaces, in both
@ -626,6 +638,15 @@ config MMC_DW_PCI
If unsure, say N.
config MMC_DW_ROCKCHIP
tristate "Rockchip specific extensions for Synopsys DW Memory Card Interface"
depends on MMC_DW && ARCH_ROCKCHIP
select MMC_DW_PLTFM
help
This selects support for Rockchip SoC specific extensions to the
Synopsys DesignWare Memory Card Interface driver. Select this option
for platforms based on RK3066, RK3188 and RK3288 SoC's.
config MMC_SH_MMCIF
tristate "SuperH Internal MMCIF support"
depends on MMC_BLOCK && HAS_DMA


@ -3,6 +3,7 @@
#
obj-$(CONFIG_MMC_ARMMMCI) += mmci.o
obj-$(CONFIG_MMC_QCOM_DML) += mmci_qcom_dml.o
obj-$(CONFIG_MMC_PXA) += pxamci.o
obj-$(CONFIG_MMC_MXC) += mxcmmc.o
obj-$(CONFIG_MMC_MXS) += mxs-mmc.o
@ -45,6 +46,7 @@ obj-$(CONFIG_MMC_DW_PLTFM) += dw_mmc-pltfm.o
obj-$(CONFIG_MMC_DW_EXYNOS) += dw_mmc-exynos.o
obj-$(CONFIG_MMC_DW_K3) += dw_mmc-k3.o
obj-$(CONFIG_MMC_DW_PCI) += dw_mmc-pci.o
obj-$(CONFIG_MMC_DW_ROCKCHIP) += dw_mmc-rockchip.o
obj-$(CONFIG_MMC_SH_MMCIF) += sh_mmcif.o
obj-$(CONFIG_MMC_JZ4740) += jz4740_mmc.o
obj-$(CONFIG_MMC_VUB300) += vub300.o


@ -17,6 +17,7 @@
#include <linux/gpio.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/module.h>
#include <linux/of.h>
@ -2195,7 +2196,8 @@ static int __init atmci_init_slot(struct atmel_mci *host,
/* Assume card is present initially */
set_bit(ATMCI_CARD_PRESENT, &slot->flags);
if (gpio_is_valid(slot->detect_pin)) {
if (gpio_request(slot->detect_pin, "mmc_detect")) {
if (devm_gpio_request(&host->pdev->dev, slot->detect_pin,
"mmc_detect")) {
dev_dbg(&mmc->class_dev, "no detect pin available\n");
slot->detect_pin = -EBUSY;
} else if (gpio_get_value(slot->detect_pin) ^
@ -2208,7 +2210,8 @@ static int __init atmci_init_slot(struct atmel_mci *host,
mmc->caps |= MMC_CAP_NEEDS_POLL;
if (gpio_is_valid(slot->wp_pin)) {
if (gpio_request(slot->wp_pin, "mmc_wp")) {
if (devm_gpio_request(&host->pdev->dev, slot->wp_pin,
"mmc_wp")) {
dev_dbg(&mmc->class_dev, "no WP pin available\n");
slot->wp_pin = -EBUSY;
}
@ -2232,7 +2235,6 @@ static int __init atmci_init_slot(struct atmel_mci *host,
dev_dbg(&mmc->class_dev,
"could not request IRQ %d for detect pin\n",
gpio_to_irq(slot->detect_pin));
gpio_free(slot->detect_pin);
slot->detect_pin = -EBUSY;
}
}
@ -2242,7 +2244,7 @@ static int __init atmci_init_slot(struct atmel_mci *host,
return 0;
}
static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot,
static void atmci_cleanup_slot(struct atmel_mci_slot *slot,
unsigned int id)
{
/* Debugfs stuff is cleaned up by mmc core */
@ -2257,10 +2259,7 @@ static void __exit atmci_cleanup_slot(struct atmel_mci_slot *slot,
free_irq(gpio_to_irq(pin), slot);
del_timer_sync(&slot->detect_timer);
gpio_free(pin);
}
if (gpio_is_valid(slot->wp_pin))
gpio_free(slot->wp_pin);
slot->host->slot[id] = NULL;
mmc_free_host(slot->mmc);
@ -2344,6 +2343,7 @@ static void __init atmci_get_cap(struct atmel_mci *host)
/* keep only major version number */
switch (version & 0xf00) {
case 0x600:
case 0x500:
host->caps.has_odd_clk_div = 1;
case 0x400:
@ -2377,7 +2377,7 @@ static int __init atmci_probe(struct platform_device *pdev)
struct resource *regs;
unsigned int nr_slots;
int irq;
int ret;
int ret, i;
regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!regs)
@ -2395,7 +2395,7 @@ static int __init atmci_probe(struct platform_device *pdev)
if (irq < 0)
return irq;
host = kzalloc(sizeof(struct atmel_mci), GFP_KERNEL);
host = devm_kzalloc(&pdev->dev, sizeof(*host), GFP_KERNEL);
if (!host)
return -ENOMEM;
@ -2403,20 +2403,18 @@ static int __init atmci_probe(struct platform_device *pdev)
spin_lock_init(&host->lock);
INIT_LIST_HEAD(&host->queue);
host->mck = clk_get(&pdev->dev, "mci_clk");
if (IS_ERR(host->mck)) {
ret = PTR_ERR(host->mck);
goto err_clk_get;
}
host->mck = devm_clk_get(&pdev->dev, "mci_clk");
if (IS_ERR(host->mck))
return PTR_ERR(host->mck);
ret = -ENOMEM;
host->regs = ioremap(regs->start, resource_size(regs));
host->regs = devm_ioremap(&pdev->dev, regs->start, resource_size(regs));
if (!host->regs)
goto err_ioremap;
return -ENOMEM;
ret = clk_prepare_enable(host->mck);
if (ret)
goto err_request_irq;
return ret;
atmci_writel(host, ATMCI_CR, ATMCI_CR_SWRST);
host->bus_hz = clk_get_rate(host->mck);
clk_disable_unprepare(host->mck);
@ -2427,7 +2425,7 @@ static int __init atmci_probe(struct platform_device *pdev)
ret = request_irq(irq, atmci_interrupt, 0, dev_name(&pdev->dev), host);
if (ret)
goto err_request_irq;
return ret;
/* Get MCI capabilities and set operations according to it */
atmci_get_cap(host);
@ -2485,7 +2483,7 @@ static int __init atmci_probe(struct platform_device *pdev)
if (!host->buffer) {
ret = -ENOMEM;
dev_err(&pdev->dev, "buffer allocation failed\n");
goto err_init_slot;
goto err_dma_alloc;
}
}
@ -2495,16 +2493,16 @@ static int __init atmci_probe(struct platform_device *pdev)
return 0;
err_dma_alloc:
for (i = 0; i < ATMCI_MAX_NR_SLOTS; i++) {
if (host->slot[i])
atmci_cleanup_slot(host->slot[i], i);
}
err_init_slot:
del_timer_sync(&host->timer);
if (host->dma.chan)
dma_release_channel(host->dma.chan);
free_irq(irq, host);
err_request_irq:
iounmap(host->regs);
err_ioremap:
clk_put(host->mck);
err_clk_get:
kfree(host);
return ret;
}
@ -2528,14 +2526,11 @@ static int __exit atmci_remove(struct platform_device *pdev)
atmci_readl(host, ATMCI_SR);
clk_disable_unprepare(host->mck);
del_timer_sync(&host->timer);
if (host->dma.chan)
dma_release_channel(host->dma.chan);
free_irq(platform_get_irq(pdev, 0), host);
iounmap(host->regs);
clk_put(host->mck);
kfree(host);
return 0;
}


@ -1028,9 +1028,12 @@ static int au1xmmc_probe(struct platform_device *pdev)
host->clk = clk_get(&pdev->dev, ALCHEMY_PERIPH_CLK);
if (IS_ERR(host->clk)) {
dev_err(&pdev->dev, "cannot find clock\n");
ret = PTR_ERR(host->clk);
goto out_irq;
}
if (clk_prepare_enable(host->clk)) {
ret = clk_prepare_enable(host->clk);
if (ret) {
dev_err(&pdev->dev, "cannot enable clock\n");
goto out_clk;
}


@ -95,9 +95,6 @@ static int dw_mci_pci_resume(struct device *dev)
return dw_mci_resume(host);
}
#else
#define dw_mci_pci_suspend NULL
#define dw_mci_pci_resume NULL
#endif /* CONFIG_PM_SLEEP */
static SIMPLE_DEV_PM_OPS(dw_mci_pci_pmops, dw_mci_pci_suspend, dw_mci_pci_resume);


@ -21,6 +21,7 @@
#include <linux/mmc/mmc.h>
#include <linux/mmc/dw_mmc.h>
#include <linux/of.h>
#include <linux/clk.h>
#include "dw_mmc.h"
#include "dw_mmc-pltfm.h"
@ -30,10 +31,6 @@ static void dw_mci_pltfm_prepare_command(struct dw_mci *host, u32 *cmdr)
*cmdr |= SDMMC_CMD_USE_HOLD_REG;
}
static const struct dw_mci_drv_data rockchip_drv_data = {
.prepare_command = dw_mci_pltfm_prepare_command,
};
static const struct dw_mci_drv_data socfpga_drv_data = {
.prepare_command = dw_mci_pltfm_prepare_command,
};
@ -84,9 +81,6 @@ static int dw_mci_pltfm_resume(struct device *dev)
return dw_mci_resume(host);
}
#else
#define dw_mci_pltfm_suspend NULL
#define dw_mci_pltfm_resume NULL
#endif /* CONFIG_PM_SLEEP */
SIMPLE_DEV_PM_OPS(dw_mci_pltfm_pmops, dw_mci_pltfm_suspend, dw_mci_pltfm_resume);
@ -94,8 +88,6 @@ EXPORT_SYMBOL_GPL(dw_mci_pltfm_pmops);
static const struct of_device_id dw_mci_pltfm_match[] = {
{ .compatible = "snps,dw-mshc", },
{ .compatible = "rockchip,rk2928-dw-mshc",
.data = &rockchip_drv_data },
{ .compatible = "altr,socfpga-dw-mshc",
.data = &socfpga_drv_data },
{},


@ -0,0 +1,136 @@
/*
* Copyright (c) 2014, Fuzhou Rockchip Electronics Co., Ltd
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/mmc/host.h>
#include <linux/mmc/dw_mmc.h>
#include <linux/of_address.h>
#include "dw_mmc.h"
#include "dw_mmc-pltfm.h"
#define RK3288_CLKGEN_DIV 2
static void dw_mci_rockchip_prepare_command(struct dw_mci *host, u32 *cmdr)
{
*cmdr |= SDMMC_CMD_USE_HOLD_REG;
}
static int dw_mci_rk3288_setup_clock(struct dw_mci *host)
{
host->bus_hz /= RK3288_CLKGEN_DIV;
return 0;
}
static void dw_mci_rk3288_set_ios(struct dw_mci *host, struct mmc_ios *ios)
{
int ret;
unsigned int cclkin;
u32 bus_hz;
/*
* cclkin: source clock of mmc controller
* bus_hz: card interface clock generated by CLKGEN
* bus_hz = cclkin / RK3288_CLKGEN_DIV
* ios->clock = (div == 0) ? bus_hz : (bus_hz / (2 * div))
*
* Note: div can only be 0 or 1
* if DDR50 8bit mode(only emmc work in 8bit mode),
* div must be set 1
*/
if (ios->bus_width == MMC_BUS_WIDTH_8 &&
ios->timing == MMC_TIMING_MMC_DDR52)
cclkin = 2 * ios->clock * RK3288_CLKGEN_DIV;
else
cclkin = ios->clock * RK3288_CLKGEN_DIV;
ret = clk_set_rate(host->ciu_clk, cclkin);
if (ret)
dev_warn(host->dev, "failed to set rate %uHz\n", ios->clock);
bus_hz = clk_get_rate(host->ciu_clk) / RK3288_CLKGEN_DIV;
if (bus_hz != host->bus_hz) {
host->bus_hz = bus_hz;
/* force dw_mci_setup_bus() */
host->current_speed = 0;
}
}
static const struct dw_mci_drv_data rk2928_drv_data = {
.prepare_command = dw_mci_rockchip_prepare_command,
};
static const struct dw_mci_drv_data rk3288_drv_data = {
.prepare_command = dw_mci_rockchip_prepare_command,
.set_ios = dw_mci_rk3288_set_ios,
.setup_clock = dw_mci_rk3288_setup_clock,
};
static const struct of_device_id dw_mci_rockchip_match[] = {
{ .compatible = "rockchip,rk2928-dw-mshc",
.data = &rk2928_drv_data },
{ .compatible = "rockchip,rk3288-dw-mshc",
.data = &rk3288_drv_data },
{},
};
MODULE_DEVICE_TABLE(of, dw_mci_rockchip_match);
static int dw_mci_rockchip_probe(struct platform_device *pdev)
{
const struct dw_mci_drv_data *drv_data;
const struct of_device_id *match;
if (!pdev->dev.of_node)
return -ENODEV;
match = of_match_node(dw_mci_rockchip_match, pdev->dev.of_node);
drv_data = match->data;
return dw_mci_pltfm_register(pdev, drv_data);
}
#ifdef CONFIG_PM_SLEEP
static int dw_mci_rockchip_suspend(struct device *dev)
{
struct dw_mci *host = dev_get_drvdata(dev);
return dw_mci_suspend(host);
}
static int dw_mci_rockchip_resume(struct device *dev)
{
struct dw_mci *host = dev_get_drvdata(dev);
return dw_mci_resume(host);
}
#endif /* CONFIG_PM_SLEEP */
static SIMPLE_DEV_PM_OPS(dw_mci_rockchip_pmops,
dw_mci_rockchip_suspend,
dw_mci_rockchip_resume);
static struct platform_driver dw_mci_rockchip_pltfm_driver = {
.probe = dw_mci_rockchip_probe,
.remove = __exit_p(dw_mci_pltfm_remove),
.driver = {
.name = "dwmmc_rockchip",
.of_match_table = dw_mci_rockchip_match,
.pm = &dw_mci_rockchip_pmops,
},
};
module_platform_driver(dw_mci_rockchip_pltfm_driver);
MODULE_AUTHOR("Addy Ke <addy.ke@rock-chips.com>");
MODULE_DESCRIPTION("Rockchip Specific DW-MSHC Driver Extension");
MODULE_ALIAS("platform:dwmmc-rockchip");
MODULE_LICENSE("GPL v2");

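A worked example of the RK3288 clock handling in the new driver above, using the formula from its comment (ios->clock = (div == 0) ? bus_hz : bus_hz / (2 * div)) and illustrative frequencies:

/*
 * 8-bit DDR52, ios->clock = 52 MHz:
 *   cclkin = 2 * 52 MHz * RK3288_CLKGEN_DIV = 208 MHz  (clk_set_rate)
 *   bus_hz = 208 MHz / RK3288_CLKGEN_DIV    = 104 MHz
 *   with div = 1 (per the "div must be set 1" note):
 *   card clock = 104 MHz / (2 * 1)          = 52 MHz
 *
 * SDR, ios->clock = 50 MHz:
 *   cclkin = 50 MHz * RK3288_CLKGEN_DIV = 100 MHz
 *   bus_hz = 50 MHz, div = 0, card clock = bus_hz = 50 MHz
 */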

@ -29,6 +29,7 @@
#include <linux/irq.h>
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/sd.h>
#include <linux/mmc/sdio.h>
#include <linux/mmc/dw_mmc.h>
#include <linux/bitops.h>
@ -81,36 +82,6 @@ struct idmac_desc {
};
#endif /* CONFIG_MMC_DW_IDMAC */
static const u8 tuning_blk_pattern_4bit[] = {
0xff, 0x0f, 0xff, 0x00, 0xff, 0xcc, 0xc3, 0xcc,
0xc3, 0x3c, 0xcc, 0xff, 0xfe, 0xff, 0xfe, 0xef,
0xff, 0xdf, 0xff, 0xdd, 0xff, 0xfb, 0xff, 0xfb,
0xbf, 0xff, 0x7f, 0xff, 0x77, 0xf7, 0xbd, 0xef,
0xff, 0xf0, 0xff, 0xf0, 0x0f, 0xfc, 0xcc, 0x3c,
0xcc, 0x33, 0xcc, 0xcf, 0xff, 0xef, 0xff, 0xee,
0xff, 0xfd, 0xff, 0xfd, 0xdf, 0xff, 0xbf, 0xff,
0xbb, 0xff, 0xf7, 0xff, 0xf7, 0x7f, 0x7b, 0xde,
};
static const u8 tuning_blk_pattern_8bit[] = {
0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0x00, 0x00,
0xff, 0xff, 0xcc, 0xcc, 0xcc, 0x33, 0xcc, 0xcc,
0xcc, 0x33, 0x33, 0xcc, 0xcc, 0xcc, 0xff, 0xff,
0xff, 0xee, 0xff, 0xff, 0xff, 0xee, 0xee, 0xff,
0xff, 0xff, 0xdd, 0xff, 0xff, 0xff, 0xdd, 0xdd,
0xff, 0xff, 0xff, 0xbb, 0xff, 0xff, 0xff, 0xbb,
0xbb, 0xff, 0xff, 0xff, 0x77, 0xff, 0xff, 0xff,
0x77, 0x77, 0xff, 0x77, 0xbb, 0xdd, 0xee, 0xff,
0xff, 0xff, 0xff, 0x00, 0xff, 0xff, 0xff, 0x00,
0x00, 0xff, 0xff, 0xcc, 0xcc, 0xcc, 0x33, 0xcc,
0xcc, 0xcc, 0x33, 0x33, 0xcc, 0xcc, 0xcc, 0xff,
0xff, 0xff, 0xee, 0xff, 0xff, 0xff, 0xee, 0xee,
0xff, 0xff, 0xff, 0xdd, 0xff, 0xff, 0xff, 0xdd,
0xdd, 0xff, 0xff, 0xff, 0xbb, 0xff, 0xff, 0xff,
0xbb, 0xbb, 0xff, 0xff, 0xff, 0x77, 0xff, 0xff,
0xff, 0x77, 0x77, 0xff, 0x77, 0xbb, 0xdd, 0xee,
};
static bool dw_mci_reset(struct dw_mci *host);
#if defined(CONFIG_DEBUG_FS)
@ -234,10 +205,13 @@ static void dw_mci_init_debugfs(struct dw_mci_slot *slot)
}
#endif /* defined(CONFIG_DEBUG_FS) */
static void mci_send_cmd(struct dw_mci_slot *slot, u32 cmd, u32 arg);
static u32 dw_mci_prepare_command(struct mmc_host *mmc, struct mmc_command *cmd)
{
struct mmc_data *data;
struct dw_mci_slot *slot = mmc_priv(mmc);
struct dw_mci *host = slot->host;
const struct dw_mci_drv_data *drv_data = slot->host->drv_data;
u32 cmdr;
cmd->error = -EINPROGRESS;
@ -253,6 +227,34 @@ static u32 dw_mci_prepare_command(struct mmc_host *mmc, struct mmc_command *cmd)
else if (cmd->opcode != MMC_SEND_STATUS && cmd->data)
cmdr |= SDMMC_CMD_PRV_DAT_WAIT;
if (cmd->opcode == SD_SWITCH_VOLTAGE) {
u32 clk_en_a;
/* Special bit makes CMD11 not die */
cmdr |= SDMMC_CMD_VOLT_SWITCH;
/* Change state to continue to handle CMD11 weirdness */
WARN_ON(slot->host->state != STATE_SENDING_CMD);
slot->host->state = STATE_SENDING_CMD11;
/*
* We need to disable low power mode (automatic clock stop)
* while doing voltage switch so we don't confuse the card,
* since stopping the clock is a specific part of the UHS
* voltage change dance.
*
* Note that low power mode (SDMMC_CLKEN_LOW_PWR) will be
* unconditionally turned back on in dw_mci_setup_bus() if it's
* ever called with a non-zero clock. That shouldn't happen
* until the voltage change is all done.
*/
clk_en_a = mci_readl(host, CLKENA);
clk_en_a &= ~(SDMMC_CLKEN_LOW_PWR << slot->id);
mci_writel(host, CLKENA, clk_en_a);
mci_send_cmd(slot, SDMMC_CMD_UPD_CLK |
SDMMC_CMD_PRV_DAT_WAIT, 0);
}
if (cmd->flags & MMC_RSP_PRESENT) {
/* We expect a response, so set this bit */
cmdr |= SDMMC_CMD_RESP_EXP;
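
The clock handling added above is one step of the UHS signal-voltage switch that the new CMD11 states drive. Roughly, and hedged against the SD specification rather than this driver alone, the sequence is:

/*
 * 1. Send CMD11 (SD_SWITCH_VOLTAGE) with SDMMC_CMD_VOLT_SWITCH set and
 *    automatic clock gating disabled; enter STATE_SENDING_CMD11.
 * 2. After the response the card holds its lines low; the clock is
 *    stopped and the signalling level is moved to 1.8 V (UHS_REG and/or
 *    vqmmc -- see dw_mci_switch_voltage() later in this file).
 * 3. The clock is restarted and the card releases the lines; the
 *    STATE_WAITING_CMD11_DONE state covers this window before normal
 *    requests resume.
 */
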
@ -775,11 +777,15 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
unsigned int clock = slot->clock;
u32 div;
u32 clk_en_a;
u32 sdmmc_cmd_bits = SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT;
/* We must continue to set bit 28 in CMD until the change is complete */
if (host->state == STATE_WAITING_CMD11_DONE)
sdmmc_cmd_bits |= SDMMC_CMD_VOLT_SWITCH;
if (!clock) {
mci_writel(host, CLKENA, 0);
mci_send_cmd(slot,
SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT, 0);
mci_send_cmd(slot, sdmmc_cmd_bits, 0);
} else if (clock != host->current_speed || force_clkinit) {
div = host->bus_hz / clock;
if (host->bus_hz % clock && host->bus_hz > clock)
@ -803,15 +809,13 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
mci_writel(host, CLKSRC, 0);
/* inform CIU */
mci_send_cmd(slot,
SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT, 0);
mci_send_cmd(slot, sdmmc_cmd_bits, 0);
/* set clock to desired speed */
mci_writel(host, CLKDIV, div);
/* inform CIU */
mci_send_cmd(slot,
SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT, 0);
mci_send_cmd(slot, sdmmc_cmd_bits, 0);
/* enable clock; only low power if no SDIO */
clk_en_a = SDMMC_CLKEN_ENABLE << slot->id;
@ -820,8 +824,7 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
mci_writel(host, CLKENA, clk_en_a);
/* inform CIU */
mci_send_cmd(slot,
SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT, 0);
mci_send_cmd(slot, sdmmc_cmd_bits, 0);
/* keep the clock with reflecting clock dividor */
slot->__clk_old = clock << div;
@ -897,6 +900,17 @@ static void dw_mci_queue_request(struct dw_mci *host, struct dw_mci_slot *slot,
slot->mrq = mrq;
if (host->state == STATE_WAITING_CMD11_DONE) {
dev_warn(&slot->mmc->class_dev,
"Voltage change didn't complete\n");
/*
* this case isn't expected to happen, so we can
* either crash here or just try to continue on
* in the closest possible state
*/
host->state = STATE_IDLE;
}
if (host->state == STATE_IDLE) {
host->state = STATE_SENDING_CMD;
dw_mci_start_request(host, slot);
@ -936,6 +950,7 @@ static void dw_mci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
struct dw_mci_slot *slot = mmc_priv(mmc);
const struct dw_mci_drv_data *drv_data = slot->host->drv_data;
u32 regs;
int ret;
switch (ios->bus_width) {
case MMC_BUS_WIDTH_4:
@ -972,14 +987,43 @@ static void dw_mci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
/* Slot specific timing and width adjustment */
dw_mci_setup_bus(slot, false);
if (slot->host->state == STATE_WAITING_CMD11_DONE && ios->clock != 0)
slot->host->state = STATE_IDLE;
switch (ios->power_mode) {
case MMC_POWER_UP:
if (!IS_ERR(mmc->supply.vmmc)) {
ret = mmc_regulator_set_ocr(mmc, mmc->supply.vmmc,
ios->vdd);
if (ret) {
dev_err(slot->host->dev,
"failed to enable vmmc regulator\n");
/* return if we failed to turn on vmmc */
return;
}
}
if (!IS_ERR(mmc->supply.vqmmc) && !slot->host->vqmmc_enabled) {
ret = regulator_enable(mmc->supply.vqmmc);
if (ret < 0)
dev_err(slot->host->dev,
"failed to enable vqmmc regulator\n");
else
slot->host->vqmmc_enabled = true;
}
set_bit(DW_MMC_CARD_NEED_INIT, &slot->flags);
regs = mci_readl(slot->host, PWREN);
regs |= (1 << slot->id);
mci_writel(slot->host, PWREN, regs);
break;
case MMC_POWER_OFF:
if (!IS_ERR(mmc->supply.vmmc))
mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0);
if (!IS_ERR(mmc->supply.vqmmc) && slot->host->vqmmc_enabled) {
regulator_disable(mmc->supply.vqmmc);
slot->host->vqmmc_enabled = false;
}
regs = mci_readl(slot->host, PWREN);
regs &= ~(1 << slot->id);
mci_writel(slot->host, PWREN, regs);
@ -989,6 +1033,59 @@ static void dw_mci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
}
}
static int dw_mci_card_busy(struct mmc_host *mmc)
{
struct dw_mci_slot *slot = mmc_priv(mmc);
u32 status;
/*
* Check the busy bit which is low when DAT[3:0]
* (the data lines) are 0000
*/
status = mci_readl(slot->host, STATUS);
return !!(status & SDMMC_STATUS_BUSY);
}
static int dw_mci_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios)
{
struct dw_mci_slot *slot = mmc_priv(mmc);
struct dw_mci *host = slot->host;
u32 uhs;
u32 v18 = SDMMC_UHS_18V << slot->id;
int min_uv, max_uv;
int ret;
/*
* Program the voltage. Note that some instances of dw_mmc may use
* the UHS_REG for this. For other instances (like exynos) the UHS_REG
* does no harm but you need to set the regulator directly. Try both.
*/
uhs = mci_readl(host, UHS_REG);
if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330) {
min_uv = 2700000;
max_uv = 3600000;
uhs &= ~v18;
} else {
min_uv = 1700000;
max_uv = 1950000;
uhs |= v18;
}
if (!IS_ERR(mmc->supply.vqmmc)) {
ret = regulator_set_voltage(mmc->supply.vqmmc, min_uv, max_uv);
if (ret) {
dev_err(&mmc->class_dev,
"Regulator set error %d: %d - %d\n",
ret, min_uv, max_uv);
return ret;
}
}
mci_writel(host, UHS_REG, uhs);
return 0;
}
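
/*
 * Illustrative stand-alone sketch (plain C, not part of the diff above):
 * it only restates the vqmmc voltage windows and the per-slot UHS_REG bit
 * that dw_mci_switch_voltage() uses, so the numbers can be checked in
 * isolation. SDMMC_UHS_18V is redefined locally and the slot id is made up.
 */
#include <stdio.h>
#include <stdint.h>

#define SDMMC_UHS_18V (1u << 0)

struct volt_window { int min_uv, max_uv; };

static struct volt_window pick_window(int is_3v3_signalling)
{
	/* same windows as the function above */
	struct volt_window w;

	if (is_3v3_signalling) {
		w.min_uv = 2700000;
		w.max_uv = 3600000;
	} else {
		w.min_uv = 1700000;
		w.max_uv = 1950000;
	}
	return w;
}

int main(void)
{
	int slot_id = 1;				/* second slot, for illustration */
	struct volt_window w = pick_window(0);		/* 1.8 V signalling */
	uint32_t uhs = SDMMC_UHS_18V << slot_id;	/* the bit OR-ed into UHS_REG */

	printf("UHS_REG |= 0x%08x, vqmmc window %d..%d uV\n",
	       (unsigned)uhs, w.min_uv, w.max_uv);
	return 0;
}
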
static int dw_mci_get_ro(struct mmc_host *mmc)
{
int read_only;
@ -1131,6 +1228,9 @@ static const struct mmc_host_ops dw_mci_ops = {
.get_cd = dw_mci_get_cd,
.enable_sdio_irq = dw_mci_enable_sdio_irq,
.execute_tuning = dw_mci_execute_tuning,
.card_busy = dw_mci_card_busy,
.start_signal_voltage_switch = dw_mci_switch_voltage,
};
static void dw_mci_request_end(struct dw_mci *host, struct mmc_request *mrq)
@ -1154,7 +1254,11 @@ static void dw_mci_request_end(struct dw_mci *host, struct mmc_request *mrq)
dw_mci_start_request(host, slot);
} else {
dev_vdbg(host->dev, "list empty\n");
host->state = STATE_IDLE;
if (host->state == STATE_SENDING_CMD11)
host->state = STATE_WAITING_CMD11_DONE;
else
host->state = STATE_IDLE;
}
spin_unlock(&host->lock);
@ -1265,8 +1369,10 @@ static void dw_mci_tasklet_func(unsigned long priv)
switch (state) {
case STATE_IDLE:
case STATE_WAITING_CMD11_DONE:
break;
case STATE_SENDING_CMD11:
case STATE_SENDING_CMD:
if (!test_and_clear_bit(EVENT_CMD_COMPLETE,
&host->pending_events))
@ -1299,6 +1405,14 @@ static void dw_mci_tasklet_func(unsigned long priv)
/* fall through */
case STATE_SENDING_DATA:
/*
* We could get a data error and never a transfer
* complete so we'd better check for it here.
*
* Note that we don't really care if we also got a
* transfer complete; stopping the DMA and sending an
* abort won't hurt.
*/
if (test_and_clear_bit(EVENT_DATA_ERROR,
&host->pending_events)) {
dw_mci_stop_dma(host);
@ -1312,7 +1426,29 @@ static void dw_mci_tasklet_func(unsigned long priv)
break;
set_bit(EVENT_XFER_COMPLETE, &host->completed_events);
/*
* Handle an EVENT_DATA_ERROR that might have shown up
* before the transfer completed. This might not have
* been caught by the check above because the interrupt
* could have gone off between the previous check and
* the check for transfer complete.
*
* Technically this ought not be needed assuming we
* get a DATA_COMPLETE eventually (we'll notice the
* error and end the request), but it shouldn't hurt.
*
* This has the advantage of sending the stop command.
*/
if (test_and_clear_bit(EVENT_DATA_ERROR,
&host->pending_events)) {
dw_mci_stop_dma(host);
send_stop_abort(host, data);
state = STATE_DATA_ERROR;
break;
}
prev_state = state = STATE_DATA_BUSY;
/* fall through */
case STATE_DATA_BUSY:
@ -1335,6 +1471,22 @@ static void dw_mci_tasklet_func(unsigned long priv)
/* stop command for open-ended transfer */
if (data->stop)
send_stop_abort(host, data);
} else {
/*
* If we don't have a command complete now we'll
* never get one since we just reset everything;
* better end the request.
*
* If we do have a command complete we'll fall
* through to the SENDING_STOP command and
* everything will be peachy keen.
*/
if (!test_bit(EVENT_CMD_COMPLETE,
&host->pending_events)) {
host->cmd = NULL;
dw_mci_request_end(host, mrq);
goto unlock;
}
}
/*
@ -1821,6 +1973,14 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
}
if (pending) {
/* Check volt switch first, since it can look like an error */
if ((host->state == STATE_SENDING_CMD11) &&
(pending & SDMMC_INT_VOLT_SWITCH)) {
mci_writel(host, RINTSTS, SDMMC_INT_VOLT_SWITCH);
pending &= ~SDMMC_INT_VOLT_SWITCH;
dw_mci_cmd_interrupt(host, pending);
}
if (pending & DW_MCI_CMD_ERROR_FLAGS) {
mci_writel(host, RINTSTS, DW_MCI_CMD_ERROR_FLAGS);
host->cmd_status = pending;
@ -1926,7 +2086,9 @@ static void dw_mci_work_routine_card(struct work_struct *work)
switch (host->state) {
case STATE_IDLE:
case STATE_WAITING_CMD11_DONE:
break;
case STATE_SENDING_CMD11:
case STATE_SENDING_CMD:
mrq->cmd->error = -ENOMEDIUM;
if (!mrq->data)
@ -2028,10 +2190,6 @@ static int dw_mci_of_get_slot_quirks(struct device *dev, u8 slot)
{
return 0;
}
static struct device_node *dw_mci_of_find_slot_node(struct device *dev, u8 slot)
{
return NULL;
}
#endif /* CONFIG_OF */
static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
@ -2064,7 +2222,13 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
mmc->f_max = freq[1];
}
mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
/*if there are external regulators, get them*/
ret = mmc_regulator_get_supply(mmc);
if (ret == -EPROBE_DEFER)
goto err_host_allocated;
if (!mmc->ocr_avail)
mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
if (host->pdata->caps)
mmc->caps = host->pdata->caps;
@ -2085,7 +2249,9 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
if (host->pdata->caps2)
mmc->caps2 = host->pdata->caps2;
mmc_of_parse(mmc);
ret = mmc_of_parse(mmc);
if (ret)
goto err_host_allocated;
if (host->pdata->blk_settings) {
mmc->max_segs = host->pdata->blk_settings->max_segs;
@ -2117,7 +2283,7 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
ret = mmc_add_host(mmc);
if (ret)
goto err_setup_bus;
goto err_host_allocated;
#if defined(CONFIG_DEBUG_FS)
dw_mci_init_debugfs(slot);
@ -2128,9 +2294,9 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
return 0;
err_setup_bus:
err_host_allocated:
mmc_free_host(mmc);
return -EINVAL;
return ret;
}
static void dw_mci_cleanup_slot(struct dw_mci_slot *slot, unsigned int id)
@ -2423,24 +2589,6 @@ int dw_mci_probe(struct dw_mci *host)
}
}
host->vmmc = devm_regulator_get_optional(host->dev, "vmmc");
if (IS_ERR(host->vmmc)) {
ret = PTR_ERR(host->vmmc);
if (ret == -EPROBE_DEFER)
goto err_clk_ciu;
dev_info(host->dev, "no vmmc regulator found: %d\n", ret);
host->vmmc = NULL;
} else {
ret = regulator_enable(host->vmmc);
if (ret) {
if (ret != -EPROBE_DEFER)
dev_err(host->dev,
"regulator_enable fail: %d\n", ret);
goto err_clk_ciu;
}
}
host->quirks = host->pdata->quirks;
spin_lock_init(&host->lock);
@ -2584,8 +2732,6 @@ int dw_mci_probe(struct dw_mci *host)
err_dmaunmap:
if (host->use_dma && host->dma_ops->exit)
host->dma_ops->exit(host);
if (host->vmmc)
regulator_disable(host->vmmc);
err_clk_ciu:
if (!IS_ERR(host->ciu_clk))
@ -2621,9 +2767,6 @@ void dw_mci_remove(struct dw_mci *host)
if (host->use_dma && host->dma_ops->exit)
host->dma_ops->exit(host);
if (host->vmmc)
regulator_disable(host->vmmc);
if (!IS_ERR(host->ciu_clk))
clk_disable_unprepare(host->ciu_clk);
@ -2640,9 +2783,6 @@ EXPORT_SYMBOL(dw_mci_remove);
*/
int dw_mci_suspend(struct dw_mci *host)
{
if (host->vmmc)
regulator_disable(host->vmmc);
return 0;
}
EXPORT_SYMBOL(dw_mci_suspend);
@ -2651,15 +2791,6 @@ int dw_mci_resume(struct dw_mci *host)
{
int i, ret;
if (host->vmmc) {
ret = regulator_enable(host->vmmc);
if (ret) {
dev_err(host->dev,
"failed to enable regulator: %d\n", ret);
return ret;
}
}
if (!dw_mci_ctrl_reset(host, SDMMC_CTRL_ALL_RESET_FLAGS)) {
ret = -ENODEV;
return ret;


@ -99,6 +99,7 @@
#define SDMMC_INT_HLE BIT(12)
#define SDMMC_INT_FRUN BIT(11)
#define SDMMC_INT_HTO BIT(10)
#define SDMMC_INT_VOLT_SWITCH BIT(10) /* overloads bit 10! */
#define SDMMC_INT_DRTO BIT(9)
#define SDMMC_INT_RTO BIT(8)
#define SDMMC_INT_DCRC BIT(7)
@ -113,6 +114,7 @@
/* Command register defines */
#define SDMMC_CMD_START BIT(31)
#define SDMMC_CMD_USE_HOLD_REG BIT(29)
#define SDMMC_CMD_VOLT_SWITCH BIT(28)
#define SDMMC_CMD_CCS_EXP BIT(23)
#define SDMMC_CMD_CEATA_RD BIT(22)
#define SDMMC_CMD_UPD_CLK BIT(21)
@ -130,6 +132,7 @@
/* Status register defines */
#define SDMMC_GET_FCNT(x) (((x)>>17) & 0x1FFF)
#define SDMMC_STATUS_DMA_REQ BIT(31)
#define SDMMC_STATUS_BUSY BIT(9)
/* FIFOTH register defines */
#define SDMMC_SET_FIFOTH(m, r, t) (((m) & 0x7) << 28 | \
((r) & 0xFFF) << 16 | \
@ -150,7 +153,7 @@
#define SDMMC_GET_VERID(x) ((x) & 0xFFFF)
/* Card read threshold */
#define SDMMC_SET_RD_THLD(v, x) (((v) & 0x1FFF) << 16 | (x))
#define SDMMC_UHS_18V BIT(0)
/* All ctrl reset bits */
#define SDMMC_CTRL_ALL_RESET_FLAGS \
(SDMMC_CTRL_RESET | SDMMC_CTRL_FIFO_RESET | SDMMC_CTRL_DMA_RESET)


@ -30,7 +30,9 @@
#include <asm/mach-jz4740/gpio.h>
#include <asm/cacheflush.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
#include <asm/mach-jz4740/dma.h>
#include <asm/mach-jz4740/jz4740_mmc.h>
#define JZ_REG_MMC_STRPCL 0x00
@ -112,6 +114,11 @@ enum jz4740_mmc_state {
JZ4740_MMC_STATE_DONE,
};
struct jz4740_mmc_host_next {
int sg_len;
s32 cookie;
};
struct jz4740_mmc_host {
struct mmc_host *mmc;
struct platform_device *pdev;
@ -122,6 +129,7 @@ struct jz4740_mmc_host {
int card_detect_irq;
void __iomem *base;
struct resource *mem_res;
struct mmc_request *req;
struct mmc_command *cmd;
@ -136,8 +144,220 @@ struct jz4740_mmc_host {
struct timer_list timeout_timer;
struct sg_mapping_iter miter;
enum jz4740_mmc_state state;
/* DMA support */
struct dma_chan *dma_rx;
struct dma_chan *dma_tx;
struct jz4740_mmc_host_next next_data;
bool use_dma;
int sg_len;
/* The DMA trigger level is 8 words, that is to say, the DMA read
* trigger is when data words in MSC_RXFIFO is >= 8 and the DMA write
* trigger is when data words in MSC_TXFIFO is < 8.
*/
#define JZ4740_MMC_FIFO_HALF_SIZE 8
};
/*----------------------------------------------------------------------------*/
/* DMA infrastructure */
static void jz4740_mmc_release_dma_channels(struct jz4740_mmc_host *host)
{
if (!host->use_dma)
return;
dma_release_channel(host->dma_tx);
dma_release_channel(host->dma_rx);
}
static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host)
{
dma_cap_mask_t mask;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
host->dma_tx = dma_request_channel(mask, NULL, host);
if (!host->dma_tx) {
dev_err(mmc_dev(host->mmc), "Failed to get dma_tx channel\n");
return -ENODEV;
}
host->dma_rx = dma_request_channel(mask, NULL, host);
if (!host->dma_rx) {
dev_err(mmc_dev(host->mmc), "Failed to get dma_rx channel\n");
goto free_master_write;
}
/* Initialize DMA pre request cookie */
host->next_data.cookie = 1;
return 0;
free_master_write:
dma_release_channel(host->dma_tx);
return -ENODEV;
}
static inline int jz4740_mmc_get_dma_dir(struct mmc_data *data)
{
return (data->flags & MMC_DATA_READ) ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
}
static inline struct dma_chan *jz4740_mmc_get_dma_chan(struct jz4740_mmc_host *host,
struct mmc_data *data)
{
return (data->flags & MMC_DATA_READ) ? host->dma_rx : host->dma_tx;
}
static void jz4740_mmc_dma_unmap(struct jz4740_mmc_host *host,
struct mmc_data *data)
{
struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data);
enum dma_data_direction dir = jz4740_mmc_get_dma_dir(data);
dma_unmap_sg(chan->device->dev, data->sg, data->sg_len, dir);
}
/* Prepares DMA data for current/next transfer, returns non-zero on failure */
static int jz4740_mmc_prepare_dma_data(struct jz4740_mmc_host *host,
struct mmc_data *data,
struct jz4740_mmc_host_next *next,
struct dma_chan *chan)
{
struct jz4740_mmc_host_next *next_data = &host->next_data;
enum dma_data_direction dir = jz4740_mmc_get_dma_dir(data);
int sg_len;
if (!next && data->host_cookie &&
data->host_cookie != host->next_data.cookie) {
dev_warn(mmc_dev(host->mmc),
"[%s] invalid cookie: data->host_cookie %d host->next_data.cookie %d\n",
__func__,
data->host_cookie,
host->next_data.cookie);
data->host_cookie = 0;
}
/* Check if next job is already prepared */
if (next || data->host_cookie != host->next_data.cookie) {
sg_len = dma_map_sg(chan->device->dev,
data->sg,
data->sg_len,
dir);
} else {
sg_len = next_data->sg_len;
next_data->sg_len = 0;
}
if (sg_len <= 0) {
dev_err(mmc_dev(host->mmc),
"Failed to map scatterlist for DMA operation\n");
return -EINVAL;
}
if (next) {
next->sg_len = sg_len;
data->host_cookie = ++next->cookie < 0 ? 1 : next->cookie;
} else
host->sg_len = sg_len;
return 0;
}
static int jz4740_mmc_start_dma_transfer(struct jz4740_mmc_host *host,
struct mmc_data *data)
{
int ret;
struct dma_chan *chan;
struct dma_async_tx_descriptor *desc;
struct dma_slave_config conf = {
.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
.src_maxburst = JZ4740_MMC_FIFO_HALF_SIZE,
.dst_maxburst = JZ4740_MMC_FIFO_HALF_SIZE,
};
if (data->flags & MMC_DATA_WRITE) {
conf.direction = DMA_MEM_TO_DEV;
conf.dst_addr = host->mem_res->start + JZ_REG_MMC_TXFIFO;
conf.slave_id = JZ4740_DMA_TYPE_MMC_TRANSMIT;
chan = host->dma_tx;
} else {
conf.direction = DMA_DEV_TO_MEM;
conf.src_addr = host->mem_res->start + JZ_REG_MMC_RXFIFO;
conf.slave_id = JZ4740_DMA_TYPE_MMC_RECEIVE;
chan = host->dma_rx;
}
ret = jz4740_mmc_prepare_dma_data(host, data, NULL, chan);
if (ret)
return ret;
dmaengine_slave_config(chan, &conf);
desc = dmaengine_prep_slave_sg(chan,
data->sg,
host->sg_len,
conf.direction,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc) {
dev_err(mmc_dev(host->mmc),
"Failed to allocate DMA %s descriptor",
conf.direction == DMA_MEM_TO_DEV ? "TX" : "RX");
goto dma_unmap;
}
dmaengine_submit(desc);
dma_async_issue_pending(chan);
return 0;
dma_unmap:
jz4740_mmc_dma_unmap(host, data);
return -ENOMEM;
}
static void jz4740_mmc_pre_request(struct mmc_host *mmc,
struct mmc_request *mrq,
bool is_first_req)
{
struct jz4740_mmc_host *host = mmc_priv(mmc);
struct mmc_data *data = mrq->data;
struct jz4740_mmc_host_next *next_data = &host->next_data;
BUG_ON(data->host_cookie);
if (host->use_dma) {
struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data);
if (jz4740_mmc_prepare_dma_data(host, data, next_data, chan))
data->host_cookie = 0;
}
}
static void jz4740_mmc_post_request(struct mmc_host *mmc,
struct mmc_request *mrq,
int err)
{
struct jz4740_mmc_host *host = mmc_priv(mmc);
struct mmc_data *data = mrq->data;
if (host->use_dma && data->host_cookie) {
jz4740_mmc_dma_unmap(host, data);
data->host_cookie = 0;
}
if (err) {
struct dma_chan *chan = jz4740_mmc_get_dma_chan(host, data);
dmaengine_terminate_all(chan);
}
}
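
/*
 * Stand-alone model (plain C, not kernel code) of the host_cookie handshake
 * used by the pre_req/post_req hooks above: pre_req maps the scatterlist
 * ahead of time and stamps a cookie, the request path reuses the mapping
 * only while the cookie still matches, and post_req unmaps and clears it.
 * The struct names and the "mapped" flag are simplifications.
 */
#include <stdio.h>

struct fake_data { int host_cookie; int mapped; };
struct fake_host { int next_cookie; };

static void pre_req(struct fake_host *h, struct fake_data *d)
{
	d->mapped = 1;				/* stands in for dma_map_sg() */
	d->host_cookie = ++h->next_cookie;	/* remember who prepared it */
}

static void start_request(struct fake_host *h, struct fake_data *d)
{
	if (d->host_cookie != h->next_cookie)
		d->mapped = 1;			/* not pre-mapped: map it now */
	/* ...set up and submit the DMA descriptor here... */
}

static void post_req(struct fake_data *d)
{
	if (d->host_cookie) {
		d->mapped = 0;			/* stands in for dma_unmap_sg() */
		d->host_cookie = 0;
	}
}

int main(void)
{
	struct fake_host host = { .next_cookie = 1 };
	struct fake_data data = { 0, 0 };

	pre_req(&host, &data);
	start_request(&host, &data);
	post_req(&data);
	printf("mapped=%d cookie=%d\n", data.mapped, data.host_cookie);
	return 0;
}
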
/*----------------------------------------------------------------------------*/
static void jz4740_mmc_set_irq_enabled(struct jz4740_mmc_host *host,
unsigned int irq, bool enabled)
{
@ -442,6 +662,8 @@ static void jz4740_mmc_send_command(struct jz4740_mmc_host *host,
cmdat |= JZ_MMC_CMDAT_WRITE;
if (cmd->data->flags & MMC_DATA_STREAM)
cmdat |= JZ_MMC_CMDAT_STREAM;
if (host->use_dma)
cmdat |= JZ_MMC_CMDAT_DMA_EN;
writew(cmd->data->blksz, host->base + JZ_REG_MMC_BLKLEN);
writew(cmd->data->blocks, host->base + JZ_REG_MMC_NOB);
@ -474,6 +696,7 @@ static irqreturn_t jz_mmc_irq_worker(int irq, void *devid)
struct jz4740_mmc_host *host = (struct jz4740_mmc_host *)devid;
struct mmc_command *cmd = host->req->cmd;
struct mmc_request *req = host->req;
struct mmc_data *data = cmd->data;
bool timeout = false;
if (cmd->error)
@ -484,23 +707,37 @@ static irqreturn_t jz_mmc_irq_worker(int irq, void *devid)
if (cmd->flags & MMC_RSP_PRESENT)
jz4740_mmc_read_response(host, cmd);
if (!cmd->data)
if (!data)
break;
jz_mmc_prepare_data_transfer(host);
case JZ4740_MMC_STATE_TRANSFER_DATA:
if (cmd->data->flags & MMC_DATA_READ)
timeout = jz4740_mmc_read_data(host, cmd->data);
if (host->use_dma) {
/* Use DMA if enabled.
* Data transfer direction is defined later by
* relying on data flags in
* jz4740_mmc_prepare_dma_data() and
* jz4740_mmc_start_dma_transfer().
*/
timeout = jz4740_mmc_start_dma_transfer(host, data);
data->bytes_xfered = data->blocks * data->blksz;
} else if (data->flags & MMC_DATA_READ)
/* Use PIO if DMA is not enabled.
* Data transfer direction was defined before
* by relying on data flags in
* jz_mmc_prepare_data_transfer().
*/
timeout = jz4740_mmc_read_data(host, data);
else
timeout = jz4740_mmc_write_data(host, cmd->data);
timeout = jz4740_mmc_write_data(host, data);
if (unlikely(timeout)) {
host->state = JZ4740_MMC_STATE_TRANSFER_DATA;
break;
}
jz4740_mmc_transfer_check_state(host, cmd->data);
jz4740_mmc_transfer_check_state(host, data);
timeout = jz4740_mmc_poll_irq(host, JZ_MMC_IRQ_DATA_TRAN_DONE);
if (unlikely(timeout)) {
@ -664,6 +901,8 @@ static void jz4740_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
static const struct mmc_host_ops jz4740_mmc_ops = {
.request = jz4740_mmc_request,
.pre_req = jz4740_mmc_pre_request,
.post_req = jz4740_mmc_post_request,
.set_ios = jz4740_mmc_set_ios,
.get_ro = mmc_gpio_get_ro,
.get_cd = mmc_gpio_get_cd,
@ -757,7 +996,6 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
struct mmc_host *mmc;
struct jz4740_mmc_host *host;
struct jz4740_mmc_platform_data *pdata;
struct resource *res;
pdata = pdev->dev.platform_data;
@ -784,10 +1022,11 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
goto err_free_host;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
host->base = devm_ioremap_resource(&pdev->dev, res);
host->mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
host->base = devm_ioremap_resource(&pdev->dev, host->mem_res);
if (IS_ERR(host->base)) {
ret = PTR_ERR(host->base);
dev_err(&pdev->dev, "Failed to ioremap base memory\n");
goto err_free_host;
}
@ -834,6 +1073,10 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
/* It is not important when it times out, it just needs to time out. */
set_timer_slack(&host->timeout_timer, HZ);
host->use_dma = true;
if (host->use_dma && jz4740_mmc_acquire_dma_channels(host) != 0)
host->use_dma = false;
platform_set_drvdata(pdev, host);
ret = mmc_add_host(mmc);
@ -843,6 +1086,10 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
}
dev_info(&pdev->dev, "JZ SD/MMC card driver registered\n");
dev_info(&pdev->dev, "Using %s, %d-bit mode\n",
host->use_dma ? "DMA" : "PIO",
(mmc->caps & MMC_CAP_4_BIT_DATA) ? 4 : 1);
return 0;
err_free_irq:
@ -850,6 +1097,8 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
err_free_gpios:
jz4740_mmc_free_gpios(pdev);
err_gpio_bulk_free:
if (host->use_dma)
jz4740_mmc_release_dma_channels(host);
jz_gpio_bulk_free(jz4740_mmc_pins, jz4740_mmc_num_pins(host));
err_free_host:
mmc_free_host(mmc);
@ -872,6 +1121,9 @@ static int jz4740_mmc_remove(struct platform_device *pdev)
jz4740_mmc_free_gpios(pdev);
jz_gpio_bulk_free(jz4740_mmc_pins, jz4740_mmc_num_pins(host));
if (host->use_dma)
jz4740_mmc_release_dma_channels(host);
mmc_free_host(host->mmc);
return 0;
@ -909,7 +1161,6 @@ static struct platform_driver jz4740_mmc_driver = {
.remove = jz4740_mmc_remove,
.driver = {
.name = "jz4740-mmc",
.owner = THIS_MODULE,
.pm = JZ4740_MMC_PM_OPS,
},
};


@ -1436,6 +1436,7 @@ static int mmc_spi_probe(struct spi_device *spi)
host->pdata->cd_debounce);
if (status != 0)
goto fail_add_host;
mmc_gpiod_request_cd_irq(mmc);
}
if (host->pdata && host->pdata->flags & MMC_SPI_USE_RO_GPIO) {


@ -43,6 +43,7 @@
#include <asm/sizes.h>
#include "mmci.h"
#include "mmci_qcom_dml.h"
#define DRIVER_NAME "mmci-pl18x"
@ -60,12 +61,13 @@ static unsigned int fmax = 515633;
* @fifohalfsize: number of bytes that can be written when MCI_TXFIFOHALFEMPTY
* is asserted (likewise for RX)
* @data_cmd_enable: enable value for data commands.
* @sdio: variant supports SDIO
* @st_sdio: enable ST specific SDIO logic
* @st_clkdiv: true if using a ST-specific clock divider algorithm
* @datactrl_mask_ddrmode: ddr mode mask in datactrl register.
* @blksz_datactrl16: true if Block size is at b16..b30 position in datactrl register
* @blksz_datactrl4: true if Block size is at b4..b16 position in datactrl
* register
* @datactrl_mask_sdio: SDIO enable mask in datactrl register
* @pwrreg_powerup: power up value for MMCIPOWER register
* @f_max: maximum clk frequency supported by the controller.
* @signal_direction: input/out direction of bus signals can be indicated
@ -74,6 +76,7 @@ static unsigned int fmax = 515633;
* @pwrreg_nopower: bits in MMCIPOWER don't control ext. power supply
* @explicit_mclk_control: enable explicit mclk control in driver.
* @qcom_fifo: enables qcom specific fifo pio read logic.
* @qcom_dml: enables qcom specific dma glue for dma transfers.
* @reversed_irq_handling: handle data irq before cmd irq.
*/
struct variant_data {
@ -86,7 +89,8 @@ struct variant_data {
unsigned int fifohalfsize;
unsigned int data_cmd_enable;
unsigned int datactrl_mask_ddrmode;
bool sdio;
unsigned int datactrl_mask_sdio;
bool st_sdio;
bool st_clkdiv;
bool blksz_datactrl16;
bool blksz_datactrl4;
@ -98,6 +102,7 @@ struct variant_data {
bool pwrreg_nopower;
bool explicit_mclk_control;
bool qcom_fifo;
bool qcom_dml;
bool reversed_irq_handling;
};
@ -133,7 +138,8 @@ static struct variant_data variant_u300 = {
.clkreg_enable = MCI_ST_U300_HWFCEN,
.clkreg_8bit_bus_enable = MCI_ST_8BIT_BUS,
.datalength_bits = 16,
.sdio = true,
.datactrl_mask_sdio = MCI_ST_DPSM_SDIOEN,
.st_sdio = true,
.pwrreg_powerup = MCI_PWR_ON,
.f_max = 100000000,
.signal_direction = true,
@ -146,7 +152,8 @@ static struct variant_data variant_nomadik = {
.fifohalfsize = 8 * 4,
.clkreg = MCI_CLK_ENABLE,
.datalength_bits = 24,
.sdio = true,
.datactrl_mask_sdio = MCI_ST_DPSM_SDIOEN,
.st_sdio = true,
.st_clkdiv = true,
.pwrreg_powerup = MCI_PWR_ON,
.f_max = 100000000,
@ -163,7 +170,8 @@ static struct variant_data variant_ux500 = {
.clkreg_8bit_bus_enable = MCI_ST_8BIT_BUS,
.clkreg_neg_edge_enable = MCI_ST_UX500_NEG_EDGE,
.datalength_bits = 24,
.sdio = true,
.datactrl_mask_sdio = MCI_ST_DPSM_SDIOEN,
.st_sdio = true,
.st_clkdiv = true,
.pwrreg_powerup = MCI_PWR_ON,
.f_max = 100000000,
@ -182,7 +190,8 @@ static struct variant_data variant_ux500v2 = {
.clkreg_neg_edge_enable = MCI_ST_UX500_NEG_EDGE,
.datactrl_mask_ddrmode = MCI_ST_DPSM_DDRMODE,
.datalength_bits = 24,
.sdio = true,
.datactrl_mask_sdio = MCI_ST_DPSM_SDIOEN,
.st_sdio = true,
.st_clkdiv = true,
.blksz_datactrl16 = true,
.pwrreg_powerup = MCI_PWR_ON,
@ -208,6 +217,7 @@ static struct variant_data variant_qcom = {
.f_max = 208000000,
.explicit_mclk_control = true,
.qcom_fifo = true,
.qcom_dml = true,
};
static int mmci_card_busy(struct mmc_host *mmc)
@ -421,6 +431,7 @@ static void mmci_dma_setup(struct mmci_host *host)
{
const char *rxname, *txname;
dma_cap_mask_t mask;
struct variant_data *variant = host->variant;
host->dma_rx_channel = dma_request_slave_channel(mmc_dev(host->mmc), "rx");
host->dma_tx_channel = dma_request_slave_channel(mmc_dev(host->mmc), "tx");
@ -471,6 +482,10 @@ static void mmci_dma_setup(struct mmci_host *host)
if (max_seg_size < host->mmc->max_seg_size)
host->mmc->max_seg_size = max_seg_size;
}
if (variant->qcom_dml && host->dma_rx_channel && host->dma_tx_channel)
if (dml_hw_init(host, host->mmc->parent->of_node))
variant->qcom_dml = false;
}
/*
@ -572,6 +587,7 @@ static int __mmci_dma_prep_data(struct mmci_host *host, struct mmc_data *data,
struct dma_async_tx_descriptor *desc;
enum dma_data_direction buffer_dirn;
int nr_sg;
unsigned long flags = DMA_CTRL_ACK;
if (data->flags & MMC_DATA_READ) {
conf.direction = DMA_DEV_TO_MEM;
@ -596,9 +612,12 @@ static int __mmci_dma_prep_data(struct mmci_host *host, struct mmc_data *data,
if (nr_sg == 0)
return -EINVAL;
if (host->variant->qcom_dml)
flags |= DMA_PREP_INTERRUPT;
dmaengine_slave_config(chan, &conf);
desc = dmaengine_prep_slave_sg(chan, data->sg, nr_sg,
conf.direction, DMA_CTRL_ACK);
conf.direction, flags);
if (!desc)
goto unmap_exit;
@ -647,6 +666,9 @@ static int mmci_dma_start_data(struct mmci_host *host, unsigned int datactrl)
dmaengine_submit(host->dma_desc_current);
dma_async_issue_pending(host->dma_current);
if (host->variant->qcom_dml)
dml_start_xfer(host, data);
datactrl |= MCI_DPSM_DMAENABLE;
/* Trigger the DMA transfer */
@ -792,32 +814,26 @@ static void mmci_start_data(struct mmci_host *host, struct mmc_data *data)
if (data->flags & MMC_DATA_READ)
datactrl |= MCI_DPSM_DIRECTION;
/* The ST Micro variants has a special bit to enable SDIO */
if (variant->sdio && host->mmc->card)
if (mmc_card_sdio(host->mmc->card)) {
/*
* The ST Micro variants has a special bit
* to enable SDIO.
*/
u32 clk;
if (host->mmc->card && mmc_card_sdio(host->mmc->card)) {
u32 clk;
datactrl |= MCI_ST_DPSM_SDIOEN;
datactrl |= variant->datactrl_mask_sdio;
/*
* The ST Micro variant for SDIO small write transfers
* needs to have clock H/W flow control disabled,
* otherwise the transfer will not start. The threshold
* depends on the rate of MCLK.
*/
if (data->flags & MMC_DATA_WRITE &&
(host->size < 8 ||
(host->size <= 8 && host->mclk > 50000000)))
clk = host->clk_reg & ~variant->clkreg_enable;
else
clk = host->clk_reg | variant->clkreg_enable;
/*
* The ST Micro variant for SDIO small write transfers
* needs to have clock H/W flow control disabled,
* otherwise the transfer will not start. The threshold
* depends on the rate of MCLK.
*/
if (variant->st_sdio && data->flags & MMC_DATA_WRITE &&
(host->size < 8 ||
(host->size <= 8 && host->mclk > 50000000)))
clk = host->clk_reg & ~variant->clkreg_enable;
else
clk = host->clk_reg | variant->clkreg_enable;
mmci_write_clkreg(host, clk);
}
mmci_write_clkreg(host, clk);
}
if (host->mmc->ios.timing == MMC_TIMING_UHS_DDR50 ||
host->mmc->ios.timing == MMC_TIMING_MMC_DDR52)
@ -1658,16 +1674,35 @@ static int mmci_probe(struct amba_device *dev,
writel(0, host->base + MMCIMASK1);
writel(0xfff, host->base + MMCICLEAR);
/* If DT, cd/wp gpios must be supplied through it. */
if (!np && gpio_is_valid(plat->gpio_cd)) {
ret = mmc_gpio_request_cd(mmc, plat->gpio_cd, 0);
if (ret)
goto clk_disable;
}
if (!np && gpio_is_valid(plat->gpio_wp)) {
ret = mmc_gpio_request_ro(mmc, plat->gpio_wp);
if (ret)
goto clk_disable;
/*
* If we are not using DT, the cd/wp lines may still be provided as
* GPIO descriptors (e.g. via a board descriptor table). Look up the
* descriptors named "cd" and "wp" right here; fail silently if they
* do not exist and fall back to the platform data GPIOs instead.
*/
if (!np) {
ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, NULL);
if (ret < 0) {
if (ret == -EPROBE_DEFER)
goto clk_disable;
else if (gpio_is_valid(plat->gpio_cd)) {
ret = mmc_gpio_request_cd(mmc, plat->gpio_cd, 0);
if (ret)
goto clk_disable;
}
}
ret = mmc_gpiod_request_ro(mmc, "wp", 0, false, 0, NULL);
if (ret < 0) {
if (ret == -EPROBE_DEFER)
goto clk_disable;
else if (gpio_is_valid(plat->gpio_wp)) {
ret = mmc_gpio_request_ro(mmc, plat->gpio_wp);
if (ret)
goto clk_disable;
}
}
}
ret = devm_request_irq(&dev->dev, dev->irq[0], mmci_irq, IRQF_SHARED,
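
/*
 * Minimal stand-alone model (plain C, not the driver) of the fallback order
 * the probe path above implements for the card-detect line: a named GPIO
 * descriptor is tried first, probe deferral aborts, and any other failure
 * falls back to the legacy platform-data GPIO number. The helper name and
 * return values here are made up for illustration.
 */
#include <stdio.h>

#define EPROBE_DEFER 517	/* kernel-internal errno; exact value irrelevant here */

static int request_cd(int desc_ret, int legacy_gpio_valid)
{
	if (desc_ret == 0)
		return 0;		/* descriptor "cd" found and claimed */
	if (desc_ret == -EPROBE_DEFER)
		return -EPROBE_DEFER;	/* provider not ready yet: retry probe later */
	if (legacy_gpio_valid)
		return 0;		/* fall back to plat->gpio_cd */
	return 0;			/* no CD line at all: polling/broken-cd territory */
}

int main(void)
{
	/* descriptor lookup failed with a plain error, but a legacy GPIO exists */
	printf("request_cd() -> %d\n", request_cd(-2, 1));
	return 0;
}
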


@ -0,0 +1,177 @@
/*
*
* Copyright (c) 2011, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/of.h>
#include <linux/of_dma.h>
#include <linux/bitops.h>
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include "mmci.h"
/* Registers */
#define DML_CONFIG 0x00
#define PRODUCER_CRCI_MSK GENMASK(1, 0)
#define PRODUCER_CRCI_DISABLE 0
#define PRODUCER_CRCI_X_SEL BIT(0)
#define PRODUCER_CRCI_Y_SEL BIT(1)
#define CONSUMER_CRCI_MSK GENMASK(3, 2)
#define CONSUMER_CRCI_DISABLE 0
#define CONSUMER_CRCI_X_SEL BIT(2)
#define CONSUMER_CRCI_Y_SEL BIT(3)
#define PRODUCER_TRANS_END_EN BIT(4)
#define BYPASS BIT(16)
#define DIRECT_MODE BIT(17)
#define INFINITE_CONS_TRANS BIT(18)
#define DML_SW_RESET 0x08
#define DML_PRODUCER_START 0x0c
#define DML_CONSUMER_START 0x10
#define DML_PRODUCER_PIPE_LOGICAL_SIZE 0x14
#define DML_CONSUMER_PIPE_LOGICAL_SIZE 0x18
#define DML_PIPE_ID 0x1c
#define PRODUCER_PIPE_ID_SHFT 0
#define PRODUCER_PIPE_ID_MSK GENMASK(4, 0)
#define CONSUMER_PIPE_ID_SHFT 16
#define CONSUMER_PIPE_ID_MSK GENMASK(20, 16)
#define DML_PRODUCER_BAM_BLOCK_SIZE 0x24
#define DML_PRODUCER_BAM_TRANS_SIZE 0x28
/* other definitions */
#define PRODUCER_PIPE_LOGICAL_SIZE 4096
#define CONSUMER_PIPE_LOGICAL_SIZE 4096
#define DML_OFFSET 0x800
void dml_start_xfer(struct mmci_host *host, struct mmc_data *data)
{
u32 config;
void __iomem *base = host->base + DML_OFFSET;
if (data->flags & MMC_DATA_READ) {
/* Read operation: configure DML for producer operation */
/* Set producer CRCI-x and disable consumer CRCI */
config = readl_relaxed(base + DML_CONFIG);
config = (config & ~PRODUCER_CRCI_MSK) | PRODUCER_CRCI_X_SEL;
config = (config & ~CONSUMER_CRCI_MSK) | CONSUMER_CRCI_DISABLE;
writel_relaxed(config, base + DML_CONFIG);
/* Set the Producer BAM block size */
writel_relaxed(data->blksz, base + DML_PRODUCER_BAM_BLOCK_SIZE);
/* Set Producer BAM Transaction size */
writel_relaxed(data->blocks * data->blksz,
base + DML_PRODUCER_BAM_TRANS_SIZE);
/* Set Producer Transaction End bit */
config = readl_relaxed(base + DML_CONFIG);
config |= PRODUCER_TRANS_END_EN;
writel_relaxed(config, base + DML_CONFIG);
/* Trigger producer */
writel_relaxed(1, base + DML_PRODUCER_START);
} else {
/* Write operation: configure DML for consumer operation */
/* Set consumer CRCI-x and disable producer CRCI*/
config = readl_relaxed(base + DML_CONFIG);
config = (config & ~CONSUMER_CRCI_MSK) | CONSUMER_CRCI_X_SEL;
config = (config & ~PRODUCER_CRCI_MSK) | PRODUCER_CRCI_DISABLE;
writel_relaxed(config, base + DML_CONFIG);
/* Clear Producer Transaction End bit */
config = readl_relaxed(base + DML_CONFIG);
config &= ~PRODUCER_TRANS_END_EN;
writel_relaxed(config, base + DML_CONFIG);
/* Trigger consumer */
writel_relaxed(1, base + DML_CONSUMER_START);
}
/* make sure the dml is configured before dma is triggered */
wmb();
}
static int of_get_dml_pipe_index(struct device_node *np, const char *name)
{
int index;
struct of_phandle_args dma_spec;
index = of_property_match_string(np, "dma-names", name);
if (index < 0)
return -ENODEV;
if (of_parse_phandle_with_args(np, "dmas", "#dma-cells", index,
&dma_spec))
return -ENODEV;
if (dma_spec.args_count)
return dma_spec.args[0];
return -ENODEV;
}
/* Initialize the dml hardware connected to SD Card controller */
int dml_hw_init(struct mmci_host *host, struct device_node *np)
{
u32 config;
void __iomem *base;
int consumer_id, producer_id;
consumer_id = of_get_dml_pipe_index(np, "tx");
producer_id = of_get_dml_pipe_index(np, "rx");
if (producer_id < 0 || consumer_id < 0)
return -ENODEV;
base = host->base + DML_OFFSET;
/* Reset the DML block */
writel_relaxed(1, base + DML_SW_RESET);
/* Disable the producer and consumer CRCI */
config = (PRODUCER_CRCI_DISABLE | CONSUMER_CRCI_DISABLE);
/*
* Disable the bypass mode. Bypass mode will only be used
* if the data transfer is to happen in PIO mode and we don't
* want the BAM interface to connect with SDCC-DML.
*/
config &= ~BYPASS;
/*
* Disable direct mode as we don't want DML to MASTER the AHB bus.
* BAM connected with DML should MASTER the AHB bus.
*/
config &= ~DIRECT_MODE;
/*
* Disable infinite mode transfer as we won't be doing any
* infinite size data transfers. All data transfer will be
* of finite data size.
*/
config &= ~INFINITE_CONS_TRANS;
writel_relaxed(config, base + DML_CONFIG);
/*
* Initialize the logical BAM pipe size for producer
* and consumer.
*/
writel_relaxed(PRODUCER_PIPE_LOGICAL_SIZE,
base + DML_PRODUCER_PIPE_LOGICAL_SIZE);
writel_relaxed(CONSUMER_PIPE_LOGICAL_SIZE,
base + DML_CONSUMER_PIPE_LOGICAL_SIZE);
/* Initialize Producer/consumer pipe id */
writel_relaxed(producer_id | (consumer_id << CONSUMER_PIPE_ID_SHFT),
base + DML_PIPE_ID);
/* Make sure dml initialization is finished */
mb();
return 0;
}
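
/*
 * Stand-alone illustration (plain C) of how dml_hw_init() above packs the
 * BAM pipe numbers taken from the "dmas"/"dma-names" properties into
 * DML_PIPE_ID: producer id in bits 4:0, consumer id in bits 20:16, per the
 * register defines above. The pipe numbers are made up for the example.
 */
#include <stdio.h>
#include <stdint.h>

#define PRODUCER_PIPE_ID_MSK	0x1fu
#define CONSUMER_PIPE_ID_SHFT	16
#define CONSUMER_PIPE_ID_MSK	(0x1fu << CONSUMER_PIPE_ID_SHFT)

static uint32_t dml_pipe_id(unsigned int producer_id, unsigned int consumer_id)
{
	return (producer_id & PRODUCER_PIPE_ID_MSK) |
	       ((consumer_id << CONSUMER_PIPE_ID_SHFT) & CONSUMER_PIPE_ID_MSK);
}

int main(void)
{
	/* e.g. rx pipe 1 (producer) and tx pipe 2 (consumer) from the DT args */
	printf("DML_PIPE_ID = 0x%08x\n", (unsigned)dml_pipe_id(1, 2));
	return 0;
}
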


@ -0,0 +1,31 @@
/*
*
* Copyright (c) 2011, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef __MMC_QCOM_DML_H__
#define __MMC_QCOM_DML_H__
#ifdef CONFIG_MMC_QCOM_DML
int dml_hw_init(struct mmci_host *host, struct device_node *np);
void dml_start_xfer(struct mmci_host *host, struct mmc_data *data);
#else
static inline int dml_hw_init(struct mmci_host *host, struct device_node *np)
{
return -ENOSYS;
}
static inline void dml_start_xfer(struct mmci_host *host, struct mmc_data *data)
{
}
#endif /* CONFIG_MMC_QCOM_DML */
#endif /* __MMC_QCOM_DML_H__ */


@ -717,7 +717,6 @@ static struct platform_driver moxart_mmc_driver = {
.remove = moxart_remove,
.driver = {
.name = "mmc-moxart",
.owner = THIS_MODULE,
.of_match_table = moxart_mmc_match,
},
};


@ -1238,7 +1238,6 @@ static struct platform_driver mxcmci_driver = {
.id_table = mxcmci_devtype,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
.pm = &mxcmci_pm_ops,
.of_match_table = mxcmci_of_match,
}


@ -735,7 +735,6 @@ static struct platform_driver mxs_mmc_driver = {
.id_table = mxs_ssp_ids,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
#ifdef CONFIG_PM
.pm = &mxs_mmc_pm_ops,
#endif


@ -1494,7 +1494,6 @@ static struct platform_driver mmc_omap_driver = {
.remove = mmc_omap_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(mmc_omap_match),
},
};


@ -1829,7 +1829,17 @@ static int omap_hsmmc_disable_fclk(struct mmc_host *mmc)
return 0;
}
static const struct mmc_host_ops omap_hsmmc_ops = {
static int omap_hsmmc_multi_io_quirk(struct mmc_card *card,
unsigned int direction, int blk_size)
{
/* This controller can't do multiblock reads due to hw bugs */
if (direction == MMC_DATA_READ)
return 1;
return blk_size;
}
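
/*
 * Minimal sketch (plain C, hypothetical caller) of how a ->multi_io_quirk()
 * style hook such as the one above can be consumed: reads collapse to a
 * single block because of the erratum, writes keep the requested count.
 * The DATA_READ/DATA_WRITE constants are stand-ins for illustration only.
 */
#include <stdio.h>

#define DATA_READ	1
#define DATA_WRITE	2

static int multi_io_quirk(int direction, int blk_size)
{
	if (direction == DATA_READ)
		return 1;		/* hw bug: no multi-block reads */
	return blk_size;
}

int main(void)
{
	printf("read  of 8 blocks -> %d block(s) per request\n",
	       multi_io_quirk(DATA_READ, 8));
	printf("write of 8 blocks -> %d block(s) per request\n",
	       multi_io_quirk(DATA_WRITE, 8));
	return 0;
}
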
static struct mmc_host_ops omap_hsmmc_ops = {
.enable = omap_hsmmc_enable_fclk,
.disable = omap_hsmmc_disable_fclk,
.post_req = omap_hsmmc_post_req,
@ -2101,7 +2111,7 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
if (host->pdata->controller_flags & OMAP_HSMMC_BROKEN_MULTIBLOCK_READ) {
dev_info(&pdev->dev, "multiblock reads disabled due to 35xx erratum 2.1.1.128; MMC read performance may suffer\n");
mmc->caps2 |= MMC_CAP2_NO_MULTI_READ;
omap_hsmmc_ops.multi_io_quirk = omap_hsmmc_multi_io_quirk;
}
pm_runtime_enable(host->dev);
@ -2489,7 +2499,6 @@ static struct platform_driver omap_hsmmc_driver = {
.remove = omap_hsmmc_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
.pm = &omap_hsmmc_dev_pm_ops,
.of_match_table = of_match_ptr(omap_mmc_of_match),
},


@ -474,7 +474,7 @@ static void pxamci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
unsigned int clk = rate / ios->clock;
if (host->clkrt == CLKRT_OFF)
clk_enable(host->clk);
clk_prepare_enable(host->clk);
if (ios->clock == 26000000) {
/* to support 26MHz */
@ -501,7 +501,7 @@ static void pxamci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
pxamci_stop_clock(host);
if (host->clkrt != CLKRT_OFF) {
host->clkrt = CLKRT_OFF;
clk_disable(host->clk);
clk_disable_unprepare(host->clk);
}
}
@ -885,7 +885,6 @@ static struct platform_driver pxamci_driver = {
.remove = pxamci_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(pxa_mmc_dt_ids),
},
};


@ -412,6 +412,13 @@ static void sd_send_cmd_get_rsp(struct realtek_pci_sdmmc *host,
}
if (rsp_type == SD_RSP_TYPE_R2) {
/*
* The controller offloads the last byte {CRC-7, end bit 1'b1}
* of response type R2. Assign dummy CRC, 0, and end bit to the
* byte (ptr[16], which goes into the LSB of resp[3] later).
*/
ptr[16] = 1;
for (i = 0; i < 4; i++) {
cmd->resp[i] = get_unaligned_be32(ptr + 1 + i * 4);
dev_dbg(sdmmc_dev(host), "cmd->resp[%d] = 0x%08x\n",
@ -1292,6 +1299,7 @@ static void realtek_init_host(struct realtek_pci_sdmmc *host)
mmc->caps = MMC_CAP_4_BIT_DATA | MMC_CAP_SD_HIGHSPEED |
MMC_CAP_MMC_HIGHSPEED | MMC_CAP_BUS_WIDTH_TEST |
MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25;
mmc->caps2 = MMC_CAP2_NO_PRESCAN_POWERUP | MMC_CAP2_FULL_PWR_CYCLE;
mmc->max_current_330 = 400;
mmc->max_current_180 = 800;
mmc->ops = &realtek_pci_sdmmc_ops;
@ -1416,7 +1424,6 @@ static struct platform_driver rtsx_pci_sdmmc_driver = {
.remove = rtsx_pci_sdmmc_drv_remove,
.id_table = rtsx_pci_sdmmc_ids,
.driver = {
.owner = THIS_MODULE,
.name = DRV_NAME_RTSX_PCI_SDMMC,
},
};


@ -435,6 +435,13 @@ static void sd_send_cmd_get_rsp(struct rtsx_usb_sdmmc *host,
}
if (rsp_type == SD_RSP_TYPE_R2) {
/*
* The controller offloads the last byte {CRC-7, end bit 1'b1}
* of response type R2. Assign dummy CRC, 0, and end bit to the
* byte (ptr[16], which goes into the LSB of resp[3] later).
*/
ptr[16] = 1;
for (i = 0; i < 4; i++) {
cmd->resp[i] = get_unaligned_be32(ptr + 1 + i * 4);
dev_dbg(sdmmc_dev(host), "cmd->resp[%d] = 0x%08x\n",
@ -1329,6 +1336,7 @@ static void rtsx_usb_init_host(struct rtsx_usb_sdmmc *host)
MMC_CAP_MMC_HIGHSPEED | MMC_CAP_BUS_WIDTH_TEST |
MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | MMC_CAP_UHS_SDR50 |
MMC_CAP_NEEDS_POLL;
mmc->caps2 = MMC_CAP2_NO_PRESCAN_POWERUP | MMC_CAP2_FULL_PWR_CYCLE;
mmc->max_current_330 = 400;
mmc->max_current_180 = 800;
@ -1445,7 +1453,6 @@ static struct platform_driver rtsx_usb_sdmmc_driver = {
.remove = rtsx_usb_sdmmc_drv_remove,
.id_table = rtsx_usb_sdmmc_ids,
.driver = {
.owner = THIS_MODULE,
.name = "rtsx_usb_sdmmc",
},
};
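
/*
 * Stand-alone illustration (plain C) of the R2 handling described in the
 * two comments above: the controller drops the final {CRC-7, end bit} byte,
 * so the driver stores a dummy 0x01 in ptr[16] before folding the 16 bytes
 * after ptr[0] into four big-endian response words. The buffer contents
 * here are placeholders.
 */
#include <stdio.h>
#include <stdint.h>

static uint32_t be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(void)
{
	uint8_t ptr[17] = { 0x3f };	/* ptr[1..15] would carry the CSD/CID payload */
	uint32_t resp[4];
	int i;

	ptr[16] = 1;			/* dummy CRC-7 = 0 plus end bit 1'b1 */
	for (i = 0; i < 4; i++)
		resp[i] = be32(ptr + 1 + i * 4);

	printf("resp[3] = 0x%08x (LSB 0x%02x is the dummy byte)\n",
	       (unsigned)resp[3], (unsigned)(resp[3] & 0xff));
	return 0;
}
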


@ -985,7 +985,8 @@ static int s3cmci_setup_data(struct s3cmci_host *host, struct mmc_data *data)
* one block being transferred. */
if (data->blocks > 1) {
pr_warning("%s: can't do non-word sized block transfers (blksz %d)\n", __func__, data->blksz);
pr_warn("%s: can't do non-word sized block transfers (blksz %d)\n",
__func__, data->blksz);
return -EINVAL;
}
}
@ -1874,7 +1875,6 @@ MODULE_DEVICE_TABLE(platform, s3cmci_driver_ids);
static struct platform_driver s3cmci_driver = {
.driver = {
.name = "s3c-sdi",
.owner = THIS_MODULE,
},
.id_table = s3cmci_driver_ids,
.probe = s3cmci_probe,


@ -67,6 +67,8 @@ struct sdhci_acpi_slot {
unsigned int caps2;
mmc_pm_flag_t pm_caps;
unsigned int flags;
int (*probe_slot)(struct platform_device *, const char *, const char *);
int (*remove_slot)(struct platform_device *);
};
struct sdhci_acpi_host {
@ -122,13 +124,67 @@ static const struct sdhci_acpi_chip sdhci_acpi_chip_int = {
.ops = &sdhci_acpi_ops_int,
};
static int sdhci_acpi_emmc_probe_slot(struct platform_device *pdev,
const char *hid, const char *uid)
{
struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
struct sdhci_host *host;
if (!c || !c->host)
return 0;
host = c->host;
/* Platform specific code during emmc probe slot goes here */
if (hid && uid && !strcmp(hid, "80860F14") && !strcmp(uid, "1") &&
sdhci_readl(host, SDHCI_CAPABILITIES) == 0x446cc8b2 &&
sdhci_readl(host, SDHCI_CAPABILITIES_1) == 0x00000807)
host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */
return 0;
}
static int sdhci_acpi_sdio_probe_slot(struct platform_device *pdev,
const char *hid, const char *uid)
{
struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
struct sdhci_host *host;
if (!c || !c->host)
return 0;
host = c->host;
/* Platform specific code during sdio probe slot goes here */
return 0;
}
static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
const char *hid, const char *uid)
{
struct sdhci_acpi_host *c = platform_get_drvdata(pdev);
struct sdhci_host *host;
if (!c || !c->host || !c->slot)
return 0;
host = c->host;
/* Platform specific code during sd probe slot goes here */
return 0;
}
static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
.chip = &sdhci_acpi_chip_int,
.caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR,
.caps2 = MMC_CAP2_HC_ERASE_SZ,
.flags = SDHCI_ACPI_RUNTIME_PM,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | SDHCI_QUIRK2_STOP_WITH_TC,
.probe_slot = sdhci_acpi_emmc_probe_slot,
};
static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = {
@ -137,12 +193,15 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = {
.caps = MMC_CAP_NONREMOVABLE | MMC_CAP_POWER_OFF_CARD,
.flags = SDHCI_ACPI_RUNTIME_PM,
.pm_caps = MMC_PM_KEEP_POWER,
.probe_slot = sdhci_acpi_sdio_probe_slot,
};
static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sd = {
.flags = SDHCI_ACPI_SD_CD | SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL |
SDHCI_ACPI_RUNTIME_PM,
.quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON,
.quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON |
SDHCI_QUIRK2_STOP_WITH_TC,
.probe_slot = sdhci_acpi_sd_probe_slot,
};
struct sdhci_acpi_uid_slot {
@ -156,6 +215,7 @@ static const struct sdhci_acpi_uid_slot sdhci_acpi_uids[] = {
{ "80860F14" , "3" , &sdhci_acpi_slot_int_sd },
{ "80860F16" , NULL, &sdhci_acpi_slot_int_sd },
{ "INT33BB" , "2" , &sdhci_acpi_slot_int_sdio },
{ "INT33BB" , "3" , &sdhci_acpi_slot_int_sd },
{ "INT33C6" , NULL, &sdhci_acpi_slot_int_sdio },
{ "INT3436" , NULL, &sdhci_acpi_slot_int_sdio },
{ "PNP0D40" },
@ -173,8 +233,8 @@ static const struct acpi_device_id sdhci_acpi_ids[] = {
};
MODULE_DEVICE_TABLE(acpi, sdhci_acpi_ids);
static const struct sdhci_acpi_slot *sdhci_acpi_get_slot_by_ids(const char *hid,
const char *uid)
static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(const char *hid,
const char *uid)
{
const struct sdhci_acpi_uid_slot *u;
@ -189,24 +249,6 @@ static const struct sdhci_acpi_slot *sdhci_acpi_get_slot_by_ids(const char *hid,
return NULL;
}
static const struct sdhci_acpi_slot *sdhci_acpi_get_slot(acpi_handle handle,
const char *hid)
{
const struct sdhci_acpi_slot *slot;
struct acpi_device_info *info;
const char *uid = NULL;
acpi_status status;
status = acpi_get_object_info(handle, &info);
if (!ACPI_FAILURE(status) && (info->valid & ACPI_VALID_UID))
uid = info->unique_id.string;
slot = sdhci_acpi_get_slot_by_ids(hid, uid);
kfree(info);
return slot;
}
static int sdhci_acpi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -217,6 +259,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
struct resource *iomem;
resource_size_t len;
const char *hid;
const char *uid;
int err;
if (acpi_bus_get_device(handle, &device))
@ -226,6 +269,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
return -ENODEV;
hid = acpi_device_hid(device);
uid = device->pnp.unique_id;
iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!iomem)
@ -244,7 +288,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
c = sdhci_priv(host);
c->host = host;
c->slot = sdhci_acpi_get_slot(handle, hid);
c->slot = sdhci_acpi_get_slot(hid, uid);
c->pdev = pdev;
c->use_runtime_pm = sdhci_acpi_flag(c, SDHCI_ACPI_RUNTIME_PM);
@ -277,6 +321,11 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
}
if (c->slot) {
if (c->slot->probe_slot) {
err = c->slot->probe_slot(pdev, hid, uid);
if (err)
goto err_free;
}
if (c->slot->chip) {
host->ops = c->slot->chip->ops;
host->quirks |= c->slot->chip->quirks;
@ -297,7 +346,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) {
bool v = sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL);
if (mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0)) {
if (mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0, NULL)) {
dev_warn(dev, "failed to setup card detect gpio\n");
c->use_runtime_pm = false;
}
@ -334,6 +383,9 @@ static int sdhci_acpi_remove(struct platform_device *pdev)
pm_runtime_put_noidle(dev);
}
if (c->slot && c->slot->remove_slot)
c->slot->remove_slot(pdev);
dead = (sdhci_readl(c->host, SDHCI_INT_STATUS) == ~0);
sdhci_remove_host(c->host, dead);
sdhci_free_host(c->host);
@ -385,20 +437,13 @@ static int sdhci_acpi_runtime_idle(struct device *dev)
return 0;
}
#else
#define sdhci_acpi_runtime_suspend NULL
#define sdhci_acpi_runtime_resume NULL
#define sdhci_acpi_runtime_idle NULL
#endif
static const struct dev_pm_ops sdhci_acpi_pm_ops = {
.suspend = sdhci_acpi_suspend,
.resume = sdhci_acpi_resume,
.runtime_suspend = sdhci_acpi_runtime_suspend,
.runtime_resume = sdhci_acpi_runtime_resume,
.runtime_idle = sdhci_acpi_runtime_idle,
SET_RUNTIME_PM_OPS(sdhci_acpi_runtime_suspend,
sdhci_acpi_runtime_resume, sdhci_acpi_runtime_idle)
};
static struct platform_driver sdhci_acpi_driver = {


@ -225,7 +225,7 @@ static struct sdhci_pltfm_data sdhci_pltfm_data_kona = {
SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
};
static struct __initconst of_device_id sdhci_bcm_kona_of_match[] = {
static const struct of_device_id sdhci_bcm_kona_of_match[] = {
{ .compatible = "brcm,kona-sdhci"},
{ .compatible = "bcm,kona-sdhci"}, /* deprecated name */
{}
@ -359,7 +359,6 @@ static int sdhci_bcm_kona_remove(struct platform_device *pdev)
static struct platform_driver sdhci_bcm_kona_driver = {
.driver = {
.name = "sdhci-kona",
.owner = THIS_MODULE,
.pm = SDHCI_PLTFM_PMOPS,
.of_match_table = sdhci_bcm_kona_of_match,
},


@ -194,7 +194,6 @@ MODULE_DEVICE_TABLE(of, bcm2835_sdhci_of_match);
static struct platform_driver bcm2835_sdhci_driver = {
.driver = {
.name = "sdhci-bcm2835",
.owner = THIS_MODULE,
.of_match_table = bcm2835_sdhci_of_match,
.pm = SDHCI_PLTFM_PMOPS,
},


@ -106,7 +106,6 @@ static int sdhci_cns3xxx_remove(struct platform_device *pdev)
static struct platform_driver sdhci_cns3xxx_driver = {
.driver = {
.name = "sdhci-cns3xxx",
.owner = THIS_MODULE,
.pm = SDHCI_PLTFM_PMOPS,
},
.probe = sdhci_cns3xxx_probe,


@ -146,7 +146,6 @@ MODULE_DEVICE_TABLE(of, sdhci_dove_of_match_table);
static struct platform_driver sdhci_dove_driver = {
.driver = {
.name = "sdhci-dove",
.owner = THIS_MODULE,
.pm = SDHCI_PLTFM_PMOPS,
.of_match_table = sdhci_dove_of_match_table,
},


@ -880,6 +880,24 @@ static void esdhc_reset(struct sdhci_host *host, u8 mask)
sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
}
static unsigned int esdhc_get_max_timeout_count(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = pltfm_host->priv;
return esdhc_is_usdhc(imx_data) ? 1 << 28 : 1 << 27;
}
static void esdhc_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = pltfm_host->priv;
/* use maximum timeout counter */
sdhci_writeb(host, esdhc_is_usdhc(imx_data) ? 0xF : 0xE,
SDHCI_TIMEOUT_CONTROL);
}
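
/*
 * Worked example (plain C) of the relationship the two helpers above rely
 * on, assuming the standard SDHCI data timeout encoding: a Data Timeout
 * Counter value N selects 2^(13 + N) TMCLK cycles, so 0xE gives 2^27 and
 * the uSDHC-only value 0xF gives 2^28, matching esdhc_get_max_timeout_count().
 */
#include <stdio.h>

int main(void)
{
	unsigned int n;

	for (n = 0xE; n <= 0xF; n++)
		printf("TIMEOUT_CONTROL=0x%X -> %lu cycles\n", n, 1UL << (13 + n));
	return 0;
}
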
static struct sdhci_ops sdhci_esdhc_ops = {
.read_l = esdhc_readl_le,
.read_w = esdhc_readw_le,
@ -889,7 +907,9 @@ static struct sdhci_ops sdhci_esdhc_ops = {
.set_clock = esdhc_pltfm_set_clock,
.get_max_clock = esdhc_pltfm_get_max_clock,
.get_min_clock = esdhc_pltfm_get_min_clock,
.get_max_timeout_count = esdhc_get_max_timeout_count,
.get_ro = esdhc_pltfm_get_ro,
.set_timeout = esdhc_set_timeout,
.set_bus_width = esdhc_pltfm_set_bus_width,
.set_uhs_signaling = esdhc_set_uhs_signaling,
.reset = esdhc_reset,
@ -1207,7 +1227,6 @@ static const struct dev_pm_ops sdhci_esdhc_pmops = {
static struct platform_driver sdhci_esdhc_imx_driver = {
.driver = {
.name = "sdhci-esdhc-imx",
.owner = THIS_MODULE,
.of_match_table = imx_esdhc_dt_ids,
.pm = &sdhci_esdhc_pmops,
},


@ -46,24 +46,6 @@
#define CMUX_SHIFT_PHASE_SHIFT 24
#define CMUX_SHIFT_PHASE_MASK (7 << CMUX_SHIFT_PHASE_SHIFT)
static const u32 tuning_block_64[] = {
0x00ff0fff, 0xccc3ccff, 0xffcc3cc3, 0xeffefffe,
0xddffdfff, 0xfbfffbff, 0xff7fffbf, 0xefbdf777,
0xf0fff0ff, 0x3cccfc0f, 0xcfcc33cc, 0xeeffefff,
0xfdfffdff, 0xffbfffdf, 0xfff7ffbb, 0xde7b7ff7
};
static const u32 tuning_block_128[] = {
0xff00ffff, 0x0000ffff, 0xccccffff, 0xcccc33cc,
0xcc3333cc, 0xffffcccc, 0xffffeeff, 0xffeeeeff,
0xffddffff, 0xddddffff, 0xbbffffff, 0xbbffffff,
0xffffffbb, 0xffffff77, 0x77ff7777, 0xffeeddbb,
0x00ffffff, 0x00ffffff, 0xccffff00, 0xcc33cccc,
0x3333cccc, 0xffcccccc, 0xffeeffff, 0xeeeeffff,
0xddffffff, 0xddffffff, 0xffffffdd, 0xffffffbb,
0xffffbbbb, 0xffff77ff, 0xff7777ff, 0xeeddbb77
};
struct sdhci_msm_host {
struct platform_device *pdev;
void __iomem *core_mem; /* MSM SDCC mapped address */
@ -358,8 +340,8 @@ static int sdhci_msm_execute_tuning(struct sdhci_host *host, u32 opcode)
{
int tuning_seq_cnt = 3;
u8 phase, *data_buf, tuned_phases[16], tuned_phase_cnt = 0;
const u32 *tuning_block_pattern = tuning_block_64;
int size = sizeof(tuning_block_64); /* Pattern size in bytes */
const u8 *tuning_block_pattern = tuning_blk_pattern_4bit;
int size = sizeof(tuning_blk_pattern_4bit);
int rc;
struct mmc_host *mmc = host->mmc;
struct mmc_ios ios = host->mmc->ios;
@ -375,8 +357,8 @@ static int sdhci_msm_execute_tuning(struct sdhci_host *host, u32 opcode)
if ((opcode == MMC_SEND_TUNING_BLOCK_HS200) &&
(mmc->ios.bus_width == MMC_BUS_WIDTH_8)) {
tuning_block_pattern = tuning_block_128;
size = sizeof(tuning_block_128);
tuning_block_pattern = tuning_blk_pattern_8bit;
size = sizeof(tuning_blk_pattern_8bit);
}
data_buf = kmalloc(size, GFP_KERNEL);
@ -610,7 +592,6 @@ static struct platform_driver sdhci_msm_driver = {
.remove = sdhci_msm_remove,
.driver = {
.name = "sdhci_msm",
.owner = THIS_MODULE,
.of_match_table = sdhci_msm_dt_match,
},
};


@ -213,7 +213,6 @@ MODULE_DEVICE_TABLE(of, sdhci_arasan_of_match);
static struct platform_driver sdhci_arasan_driver = {
.driver = {
.name = "sdhci-arasan",
.owner = THIS_MODULE,
.of_match_table = sdhci_arasan_of_match,
.pm = &sdhci_arasan_dev_pm_ops,
},


@ -388,7 +388,6 @@ MODULE_DEVICE_TABLE(of, sdhci_esdhc_of_match);
static struct platform_driver sdhci_esdhc_driver = {
.driver = {
.name = "sdhci-esdhc",
.owner = THIS_MODULE,
.of_match_table = sdhci_esdhc_of_match,
.pm = ESDHC_PMOPS,
},


@ -89,7 +89,6 @@ MODULE_DEVICE_TABLE(of, sdhci_hlwd_of_match);
static struct platform_driver sdhci_hlwd_driver = {
.driver = {
.name = "sdhci-hlwd",
.owner = THIS_MODULE,
.of_match_table = sdhci_hlwd_of_match,
.pm = SDHCI_PLTFM_PMOPS,
},


@ -24,6 +24,7 @@
#include <linux/io.h>
#include <linux/gpio.h>
#include <linux/pm_runtime.h>
#include <linux/mmc/slot-gpio.h>
#include <linux/mmc/sdhci-pci-data.h>
#include "sdhci.h"
@ -271,6 +272,8 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR;
slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ;
slot->hw_reset = sdhci_pci_int_hw_reset;
if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BSW_EMMC)
slot->host->timeout_clk = 1000; /* 1000 kHz i.e. 1 MHz */
return 0;
}
@ -280,22 +283,35 @@ static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
return 0;
}
static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
{
slot->cd_con_id = NULL;
slot->cd_idx = 0;
slot->cd_override_level = true;
return 0;
}
static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = {
.allow_runtime_pm = true,
.probe_slot = byt_emmc_probe_slot,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
SDHCI_QUIRK2_STOP_WITH_TC,
};
static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
.quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON,
.quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON |
SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
.allow_runtime_pm = true,
.probe_slot = byt_sdio_probe_slot,
};
static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
.quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON,
.quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON |
SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
SDHCI_QUIRK2_STOP_WITH_TC,
.allow_runtime_pm = true,
.own_cd_for_runtime_pm = true,
.probe_slot = byt_sd_probe_slot,
};
/* Define Host controllers for Intel Merrifield platform */
@ -317,7 +333,9 @@ static int intel_mrfl_mmc_probe_slot(struct sdhci_pci_slot *slot)
static const struct sdhci_pci_fixes sdhci_intel_mrfl_mmc = {
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
.quirks2 = SDHCI_QUIRK2_BROKEN_HS200,
.quirks2 = SDHCI_QUIRK2_BROKEN_HS200 |
SDHCI_QUIRK2_PRESET_VALUE_BROKEN,
.allow_runtime_pm = true,
.probe_slot = intel_mrfl_mmc_probe_slot,
};
@ -876,6 +894,29 @@ static const struct pci_device_id pci_ids[] = {
.driver_data = (kernel_ulong_t)&sdhci_intel_byt_emmc,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_EMMC,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.driver_data = (kernel_ulong_t)&sdhci_intel_byt_emmc,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_SDIO,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.driver_data = (kernel_ulong_t)&sdhci_intel_byt_sdio,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
.device = PCI_DEVICE_ID_INTEL_BSW_SD,
.subvendor = PCI_ANY_ID,
.subdevice = PCI_ANY_ID,
.driver_data = (kernel_ulong_t)&sdhci_intel_byt_sd,
},
{
.vendor = PCI_VENDOR_ID_INTEL,
@ -1269,20 +1310,13 @@ static int sdhci_pci_runtime_idle(struct device *dev)
return 0;
}
#else
#define sdhci_pci_runtime_suspend NULL
#define sdhci_pci_runtime_resume NULL
#define sdhci_pci_runtime_idle NULL
#endif
static const struct dev_pm_ops sdhci_pci_pm_ops = {
.suspend = sdhci_pci_suspend,
.resume = sdhci_pci_resume,
.runtime_suspend = sdhci_pci_runtime_suspend,
.runtime_resume = sdhci_pci_runtime_resume,
.runtime_idle = sdhci_pci_runtime_idle,
SET_RUNTIME_PM_OPS(sdhci_pci_runtime_suspend,
sdhci_pci_runtime_resume, sdhci_pci_runtime_idle)
};
/*****************************************************************************\
@ -1332,6 +1366,7 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
slot->pci_bar = bar;
slot->rst_n_gpio = -EINVAL;
slot->cd_gpio = -EINVAL;
slot->cd_idx = -1;
/* Retrieve platform data if there is any */
if (*sdhci_pci_get_data)
@ -1390,6 +1425,13 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
host->mmc->slotno = slotno;
host->mmc->caps2 |= MMC_CAP2_NO_PRESCAN_POWERUP;
if (slot->cd_idx >= 0 &&
mmc_gpiod_request_cd(host->mmc, slot->cd_con_id, slot->cd_idx,
slot->cd_override_level, 0, NULL)) {
dev_warn(&pdev->dev, "failed to setup card detect gpio\n");
slot->cd_idx = -1;
}
ret = sdhci_add_host(host);
if (ret)
goto remove;
@ -1402,7 +1444,7 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
* Note sdhci_pci_add_own_cd() sets slot->cd_gpio to -EINVAL on failure.
*/
if (chip->fixes && chip->fixes->own_cd_for_runtime_pm &&
!gpio_is_valid(slot->cd_gpio))
!gpio_is_valid(slot->cd_gpio) && slot->cd_idx < 0)
chip->allow_runtime_pm = false;
return slot;


@ -11,6 +11,9 @@
#define PCI_DEVICE_ID_INTEL_BYT_SDIO 0x0f15
#define PCI_DEVICE_ID_INTEL_BYT_SD 0x0f16
#define PCI_DEVICE_ID_INTEL_BYT_EMMC2 0x0f50
#define PCI_DEVICE_ID_INTEL_BSW_EMMC 0x2294
#define PCI_DEVICE_ID_INTEL_BSW_SDIO 0x2295
#define PCI_DEVICE_ID_INTEL_BSW_SD 0x2296
#define PCI_DEVICE_ID_INTEL_MRFL_MMC 0x1190
#define PCI_DEVICE_ID_INTEL_CLV_SDIO0 0x08f9
#define PCI_DEVICE_ID_INTEL_CLV_SDIO1 0x08fa
@ -61,6 +64,10 @@ struct sdhci_pci_slot {
int cd_gpio;
int cd_irq;
char *cd_con_id;
int cd_idx;
bool cd_override_level;
void (*hw_reset)(struct sdhci_host *host);
};


@ -123,7 +123,6 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
size_t priv_size)
{
struct sdhci_host *host;
struct device_node *np = pdev->dev.of_node;
struct resource *iomem;
int ret;
@ -136,13 +135,8 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
if (resource_size(iomem) < 0x100)
dev_err(&pdev->dev, "Invalid iomem size!\n");
/* Some PCI-based MFD need the parent here */
if (pdev->dev.parent != &platform_bus && !np)
host = sdhci_alloc_host(pdev->dev.parent,
sizeof(struct sdhci_pltfm_host) + priv_size);
else
host = sdhci_alloc_host(&pdev->dev,
sizeof(struct sdhci_pltfm_host) + priv_size);
host = sdhci_alloc_host(&pdev->dev,
sizeof(struct sdhci_pltfm_host) + priv_size);
if (IS_ERR(host)) {
ret = PTR_ERR(host);


@ -261,7 +261,6 @@ static int sdhci_pxav2_remove(struct platform_device *pdev)
static struct platform_driver sdhci_pxav2_driver = {
.driver = {
.name = "sdhci-pxav2",
.owner = THIS_MODULE,
#ifdef CONFIG_OF
.of_match_table = sdhci_pxav2_of_match,
#endif


@ -224,12 +224,11 @@ static void pxav3_set_uhs_signaling(struct sdhci_host *host, unsigned int uhs)
static const struct sdhci_ops pxav3_sdhci_ops = {
.set_clock = sdhci_set_clock,
.set_uhs_signaling = pxav3_set_uhs_signaling,
.platform_send_init_74_clocks = pxav3_gen_init_74_clocks,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.set_bus_width = sdhci_set_bus_width,
.reset = pxav3_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
.set_uhs_signaling = pxav3_set_uhs_signaling,
};
static struct sdhci_pltfm_data sdhci_pxav3_pdata = {
@ -381,11 +380,11 @@ static int sdhci_pxav3_probe(struct platform_device *pdev)
return 0;
err_of_parse:
err_cd_req:
err_add_host:
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
err_of_parse:
err_cd_req:
clk_disable_unprepare(clk);
err_clk_get:
err_mbus_win:
@ -492,7 +491,6 @@ static struct platform_driver sdhci_pxav3_driver = {
#ifdef CONFIG_OF
.of_match_table = sdhci_pxav3_of_match,
#endif
.owner = THIS_MODULE,
.pm = SDHCI_PXAV3_PMOPS,
},
.probe = sdhci_pxav3_probe,


@ -606,8 +606,6 @@ static int sdhci_s3c_probe(struct platform_device *pdev)
ret = sdhci_add_host(host);
if (ret) {
dev_err(dev, "sdhci_add_host() failed\n");
pm_runtime_forbid(&pdev->dev);
pm_runtime_get_noresume(&pdev->dev);
goto err_req_regs;
}
@ -618,6 +616,8 @@ static int sdhci_s3c_probe(struct platform_device *pdev)
return 0;
err_req_regs:
pm_runtime_disable(&pdev->dev);
err_no_busclks:
clk_disable_unprepare(sc->clk_io);
@ -747,7 +747,6 @@ static struct platform_driver sdhci_s3c_driver = {
.remove = sdhci_s3c_remove,
.id_table = sdhci_s3c_driver_ids,
.driver = {
.owner = THIS_MODULE,
.name = "s3c-sdhci",
.of_match_table = of_match_ptr(sdhci_s3c_dt_match),
.pm = SDHCI_S3C_PMOPS,


@ -15,6 +15,8 @@
#include <linux/mmc/slot-gpio.h>
#include "sdhci-pltfm.h"
#define SDHCI_SIRF_8BITBUS BIT(3)
struct sdhci_sirf_priv {
struct clk *clk;
int gpio_cd;
@ -27,10 +29,30 @@ static unsigned int sdhci_sirf_get_max_clk(struct sdhci_host *host)
return clk_get_rate(priv->clk);
}
static void sdhci_sirf_set_bus_width(struct sdhci_host *host, int width)
{
u8 ctrl;
ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL);
ctrl &= ~(SDHCI_CTRL_4BITBUS | SDHCI_SIRF_8BITBUS);
/*
* CSR atlas7 and prima2 SD hosts are not SDHCI 3.0:
* the 8-bit bus width enable bit on CSR SD hosts is bit 3,
* while standard hosts use bit 5
*/
if (width == MMC_BUS_WIDTH_8)
ctrl |= SDHCI_SIRF_8BITBUS;
else if (width == MMC_BUS_WIDTH_4)
ctrl |= SDHCI_CTRL_4BITBUS;
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}
static struct sdhci_ops sdhci_sirf_ops = {
.set_clock = sdhci_set_clock,
.get_max_clock = sdhci_sirf_get_max_clk,
.set_bus_width = sdhci_set_bus_width,
.set_bus_width = sdhci_sirf_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
};
@ -94,6 +116,7 @@ static int sdhci_sirf_probe(struct platform_device *pdev)
ret);
goto err_request_cd;
}
mmc_gpiod_request_cd_irq(host->mmc);
}
return 0;
@ -167,7 +190,6 @@ MODULE_DEVICE_TABLE(of, sdhci_sirf_of_match);
static struct platform_driver sdhci_sirf_driver = {
.driver = {
.name = "sdhci-sirf",
.owner = THIS_MODULE,
.of_match_table = sdhci_sirf_of_match,
#ifdef CONFIG_PM_SLEEP
.pm = &sdhci_sirf_pm_ops,


@ -230,7 +230,6 @@ MODULE_DEVICE_TABLE(of, sdhci_spear_id_table);
static struct platform_driver sdhci_driver = {
.driver = {
.name = "sdhci",
.owner = THIS_MODULE,
.pm = &sdhci_pm_ops,
.of_match_table = of_match_ptr(sdhci_spear_id_table),
},


@ -318,7 +318,6 @@ static int sdhci_tegra_remove(struct platform_device *pdev)
static struct platform_driver sdhci_tegra_driver = {
.driver = {
.name = "sdhci-tegra",
.owner = THIS_MODULE,
.of_match_table = sdhci_tegra_dt_match,
.pm = SDHCI_PLTFM_PMOPS,
},


@ -707,19 +707,28 @@ static void sdhci_set_transfer_irqs(struct sdhci_host *host)
sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
}
static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
static void sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
{
u8 count;
if (host->ops->set_timeout) {
host->ops->set_timeout(host, cmd);
} else {
count = sdhci_calc_timeout(host, cmd);
sdhci_writeb(host, count, SDHCI_TIMEOUT_CONTROL);
}
}
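
The new sdhci_set_timeout() path above lets platform code either report its real hardware limit through ->get_max_timeout_count() or take over timeout programming completely via ->set_timeout(), both added to struct sdhci_ops later in this diff. A hedged sketch of such hooks (the sdhci_foo_* names, the 2^29 limit and the fixed 0xE register value are assumptions, not taken from this series):

    #include <linux/mmc/core.h>
    #include "sdhci.h"

    /* Assumed controller-specific maximum timeout, in timeout-clock cycles. */
    static unsigned int sdhci_foo_get_max_timeout_count(struct sdhci_host *host)
    {
    	return 1 << 29;
    }

    /* Always program the largest timeout the controller supports. */
    static void sdhci_foo_set_timeout(struct sdhci_host *host,
    				  struct mmc_command *cmd)
    {
    	sdhci_writeb(host, 0xE, SDHCI_TIMEOUT_CONTROL);
    }

    static const struct sdhci_ops sdhci_foo_ops = {
    	.set_clock		= sdhci_set_clock,
    	.set_bus_width		= sdhci_set_bus_width,
    	.reset			= sdhci_reset,
    	.set_uhs_signaling	= sdhci_set_uhs_signaling,
    	.get_max_timeout_count	= sdhci_foo_get_max_timeout_count,
    	.set_timeout		= sdhci_foo_set_timeout,
    };
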
static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
{
u8 ctrl;
struct mmc_data *data = cmd->data;
int ret;
WARN_ON(host->data);
if (data || (cmd->flags & MMC_RSP_BUSY)) {
count = sdhci_calc_timeout(host, cmd);
sdhci_writeb(host, count, SDHCI_TIMEOUT_CONTROL);
}
if (data || (cmd->flags & MMC_RSP_BUSY))
sdhci_set_timeout(host, cmd);
if (!data)
return;
@ -1007,6 +1016,7 @@ void sdhci_send_command(struct sdhci_host *host, struct mmc_command *cmd)
mod_timer(&host->timer, timeout);
host->cmd = cmd;
host->busy_handle = 0;
sdhci_prepare_data(host, cmd);
@ -1194,7 +1204,6 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
clock_set:
if (real_div)
host->mmc->actual_clock = (host->max_clk * clk_mul) / real_div;
clk |= (div & SDHCI_DIV_MASK) << SDHCI_DIVIDER_SHIFT;
clk |= ((div & SDHCI_DIV_HI_MASK) >> SDHCI_DIV_MASK_LEN)
<< SDHCI_DIVIDER_HI_SHIFT;
@ -1357,11 +1366,12 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
present_state = sdhci_readl(host, SDHCI_PRESENT_STATE);
/*
* Check if the re-tuning timer has already expired and there
* is no on-going data transfer. If so, we need to execute
* tuning procedure before sending command.
* is no on-going data transfer and DAT0 is not busy. If so,
* we need to execute tuning procedure before sending command.
*/
if ((host->flags & SDHCI_NEEDS_RETUNING) &&
!(present_state & (SDHCI_DOING_WRITE | SDHCI_DOING_READ))) {
!(present_state & (SDHCI_DOING_WRITE | SDHCI_DOING_READ)) &&
(present_state & SDHCI_DATA_0_LVL_MASK)) {
if (mmc->card) {
/* eMMC uses cmd21 but sd and sdio use cmd19 */
tuning_opcode =
@ -1471,6 +1481,18 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
if (!ios->clock || ios->clock != host->clock) {
host->ops->set_clock(host, ios->clock);
host->clock = ios->clock;
if (host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK &&
host->clock) {
host->timeout_clk = host->mmc->actual_clock ?
host->mmc->actual_clock / 1000 :
host->clock / 1000;
host->mmc->max_busy_timeout =
host->ops->get_max_timeout_count ?
host->ops->get_max_timeout_count(host) :
1 << 27;
host->mmc->max_busy_timeout /= host->timeout_clk;
}
}
sdhci_set_power(host, ios->power_mode, ios->vdd);
@ -1733,8 +1755,8 @@ static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
ret = regulator_set_voltage(mmc->supply.vqmmc, 2700000,
3600000);
if (ret) {
pr_warning("%s: Switching to 3.3V signalling voltage "
" failed\n", mmc_hostname(mmc));
pr_warn("%s: Switching to 3.3V signalling voltage failed\n",
mmc_hostname(mmc));
return -EIO;
}
}
@ -1746,8 +1768,8 @@ static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
if (!(ctrl & SDHCI_CTRL_VDD_180))
return 0;
pr_warning("%s: 3.3V regulator output did not became stable\n",
mmc_hostname(mmc));
pr_warn("%s: 3.3V regulator output did not became stable\n",
mmc_hostname(mmc));
return -EAGAIN;
case MMC_SIGNAL_VOLTAGE_180:
@ -1755,8 +1777,8 @@ static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
ret = regulator_set_voltage(mmc->supply.vqmmc,
1700000, 1950000);
if (ret) {
pr_warning("%s: Switching to 1.8V signalling voltage "
" failed\n", mmc_hostname(mmc));
pr_warn("%s: Switching to 1.8V signalling voltage failed\n",
mmc_hostname(mmc));
return -EIO;
}
}
@ -1773,8 +1795,8 @@ static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
if (ctrl & SDHCI_CTRL_VDD_180)
return 0;
pr_warning("%s: 1.8V regulator output did not became stable\n",
mmc_hostname(mmc));
pr_warn("%s: 1.8V regulator output did not became stable\n",
mmc_hostname(mmc));
return -EAGAIN;
case MMC_SIGNAL_VOLTAGE_120:
@ -1782,8 +1804,8 @@ static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
ret = regulator_set_voltage(mmc->supply.vqmmc, 1100000,
1300000);
if (ret) {
pr_warning("%s: Switching to 1.2V signalling voltage "
" failed\n", mmc_hostname(mmc));
pr_warn("%s: Switching to 1.2V signalling voltage failed\n",
mmc_hostname(mmc));
return -EIO;
}
}
@ -2203,7 +2225,7 @@ static void sdhci_tuning_timer(unsigned long data)
* *
\*****************************************************************************/
static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask)
static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask, u32 *mask)
{
BUG_ON(intmask == 0);
@ -2241,11 +2263,18 @@ static void sdhci_cmd_irq(struct sdhci_host *host, u32 intmask)
if (host->cmd->data)
DBG("Cannot wait for busy signal when also "
"doing a data transfer");
else if (!(host->quirks & SDHCI_QUIRK_NO_BUSY_IRQ))
else if (!(host->quirks & SDHCI_QUIRK_NO_BUSY_IRQ)
&& !host->busy_handle) {
/* Mark that command complete before busy is ended */
host->busy_handle = 1;
return;
}
/* The controller does not support the end-of-busy IRQ,
* fall through and take the SDHCI_INT_RESPONSE */
} else if ((host->quirks2 & SDHCI_QUIRK2_STOP_WITH_TC) &&
host->cmd->opcode == MMC_STOP_TRANSMISSION && !host->data) {
*mask &= ~SDHCI_INT_DATA_END;
}
if (intmask & SDHCI_INT_RESPONSE)
@ -2304,8 +2333,21 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
* above in sdhci_cmd_irq().
*/
if (host->cmd && (host->cmd->flags & MMC_RSP_BUSY)) {
if (intmask & SDHCI_INT_DATA_TIMEOUT) {
host->cmd->error = -ETIMEDOUT;
tasklet_schedule(&host->finish_tasklet);
return;
}
if (intmask & SDHCI_INT_DATA_END) {
sdhci_finish_command(host);
/*
* Some cards handle busy-end interrupt
* before the command completed, so make
* sure we do things in the proper order.
*/
if (host->busy_handle)
sdhci_finish_command(host);
else
host->busy_handle = 1;
return;
}
}
@ -2442,7 +2484,8 @@ static irqreturn_t sdhci_irq(int irq, void *dev_id)
}
if (intmask & SDHCI_INT_CMD_MASK)
sdhci_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK);
sdhci_cmd_irq(host, intmask & SDHCI_INT_CMD_MASK,
&intmask);
if (intmask & SDHCI_INT_DATA_MASK)
sdhci_data_irq(host, intmask & SDHCI_INT_DATA_MASK);
@ -2534,7 +2577,7 @@ void sdhci_enable_irq_wakeups(struct sdhci_host *host)
}
EXPORT_SYMBOL_GPL(sdhci_enable_irq_wakeups);
void sdhci_disable_irq_wakeups(struct sdhci_host *host)
static void sdhci_disable_irq_wakeups(struct sdhci_host *host)
{
u8 val;
u8 mask = SDHCI_WAKE_ON_INSERT | SDHCI_WAKE_ON_REMOVE
@ -2544,7 +2587,6 @@ void sdhci_disable_irq_wakeups(struct sdhci_host *host)
val &= ~mask;
sdhci_writeb(host, val, SDHCI_WAKE_UP_CONTROL);
}
EXPORT_SYMBOL_GPL(sdhci_disable_irq_wakeups);
int sdhci_suspend_host(struct sdhci_host *host)
{
@ -2749,6 +2791,7 @@ int sdhci_add_host(struct sdhci_host *host)
u32 caps[2] = {0, 0};
u32 max_current_caps;
unsigned int ocr_avail;
unsigned int override_timeout_clk;
int ret;
WARN_ON(host == NULL);
@ -2762,6 +2805,8 @@ int sdhci_add_host(struct sdhci_host *host)
if (debug_quirks2)
host->quirks2 = debug_quirks2;
override_timeout_clk = host->timeout_clk;
sdhci_do_reset(host, SDHCI_RESET_ALL);
host->version = sdhci_readw(host, SDHCI_HOST_VERSION);
@ -2807,8 +2852,7 @@ int sdhci_add_host(struct sdhci_host *host)
if (host->flags & (SDHCI_USE_SDMA | SDHCI_USE_ADMA)) {
if (host->ops->enable_dma) {
if (host->ops->enable_dma(host)) {
pr_warning("%s: No suitable DMA "
"available. Falling back to PIO.\n",
pr_warn("%s: No suitable DMA available - falling back to PIO\n",
mmc_hostname(mmc));
host->flags &=
~(SDHCI_USE_SDMA | SDHCI_USE_ADMA);
@ -2830,15 +2874,14 @@ int sdhci_add_host(struct sdhci_host *host)
dma_free_coherent(mmc_dev(mmc), ADMA_SIZE,
host->adma_desc, host->adma_addr);
kfree(host->align_buffer);
pr_warning("%s: Unable to allocate ADMA "
"buffers. Falling back to standard DMA.\n",
pr_warn("%s: Unable to allocate ADMA buffers - falling back to standard DMA\n",
mmc_hostname(mmc));
host->flags &= ~SDHCI_USE_ADMA;
host->adma_desc = NULL;
host->align_buffer = NULL;
} else if (host->adma_addr & 3) {
pr_warning("%s: unable to allocate aligned ADMA descriptor\n",
mmc_hostname(mmc));
pr_warn("%s: unable to allocate aligned ADMA descriptor\n",
mmc_hostname(mmc));
host->flags &= ~SDHCI_USE_ADMA;
dma_free_coherent(mmc_dev(mmc), ADMA_SIZE,
host->adma_desc, host->adma_addr);
@ -2908,25 +2951,30 @@ int sdhci_add_host(struct sdhci_host *host)
} else
mmc->f_min = host->max_clk / SDHCI_MAX_DIV_SPEC_200;
host->timeout_clk =
(caps[0] & SDHCI_TIMEOUT_CLK_MASK) >> SDHCI_TIMEOUT_CLK_SHIFT;
if (host->timeout_clk == 0) {
if (host->ops->get_timeout_clock) {
host->timeout_clk = host->ops->get_timeout_clock(host);
} else if (!(host->quirks &
SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK)) {
pr_err("%s: Hardware doesn't specify timeout clock "
"frequency.\n", mmc_hostname(mmc));
return -ENODEV;
if (!(host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK)) {
host->timeout_clk = (caps[0] & SDHCI_TIMEOUT_CLK_MASK) >>
SDHCI_TIMEOUT_CLK_SHIFT;
if (host->timeout_clk == 0) {
if (host->ops->get_timeout_clock) {
host->timeout_clk =
host->ops->get_timeout_clock(host);
} else {
pr_err("%s: Hardware doesn't specify timeout clock frequency.\n",
mmc_hostname(mmc));
return -ENODEV;
}
}
if (caps[0] & SDHCI_TIMEOUT_CLK_UNIT)
host->timeout_clk *= 1000;
mmc->max_busy_timeout = host->ops->get_max_timeout_count ?
host->ops->get_max_timeout_count(host) : 1 << 27;
mmc->max_busy_timeout /= host->timeout_clk;
}
if (caps[0] & SDHCI_TIMEOUT_CLK_UNIT)
host->timeout_clk *= 1000;
if (host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK)
host->timeout_clk = mmc->f_max / 1000;
mmc->max_busy_timeout = (1 << 27) / host->timeout_clk;
if (override_timeout_clk)
host->timeout_clk = override_timeout_clk;
mmc->caps |= MMC_CAP_SDIO_IRQ | MMC_CAP_ERASE | MMC_CAP_CMD23;
mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;
@ -2998,8 +3046,13 @@ int sdhci_add_host(struct sdhci_host *host)
/* SD3.0: SDR104 is supported so (for eMMC) the caps2
* field can be promoted to support HS200.
*/
if (!(host->quirks2 & SDHCI_QUIRK2_BROKEN_HS200))
if (!(host->quirks2 & SDHCI_QUIRK2_BROKEN_HS200)) {
mmc->caps2 |= MMC_CAP2_HS200;
if (IS_ERR(mmc->supply.vqmmc) ||
!regulator_is_supported_voltage
(mmc->supply.vqmmc, 1100000, 1300000))
mmc->caps2 &= ~MMC_CAP2_HS200_1_2V_SDR;
}
} else if (caps[1] & SDHCI_SUPPORT_SDR50)
mmc->caps |= MMC_CAP_UHS_SDR50;
@ -3049,7 +3102,7 @@ int sdhci_add_host(struct sdhci_host *host)
*/
max_current_caps = sdhci_readl(host, SDHCI_MAX_CURRENT);
if (!max_current_caps && !IS_ERR(mmc->supply.vmmc)) {
u32 curr = regulator_get_current_limit(mmc->supply.vmmc);
int curr = regulator_get_current_limit(mmc->supply.vmmc);
if (curr > 0) {
/* convert to SDHCI_MAX_CURRENT format */
@ -3158,8 +3211,8 @@ int sdhci_add_host(struct sdhci_host *host)
mmc->max_blk_size = (caps[0] & SDHCI_MAX_BLOCK_MASK) >>
SDHCI_MAX_BLOCK_SHIFT;
if (mmc->max_blk_size >= 3) {
pr_warning("%s: Invalid maximum block size, "
"assuming 512 bytes\n", mmc_hostname(mmc));
pr_warn("%s: Invalid maximum block size, assuming 512 bytes\n",
mmc_hostname(mmc));
mmc->max_blk_size = 0;
}
}


@ -72,6 +72,7 @@
#define SDHCI_WRITE_PROTECT 0x00080000
#define SDHCI_DATA_LVL_MASK 0x00F00000
#define SDHCI_DATA_LVL_SHIFT 20
#define SDHCI_DATA_0_LVL_MASK 0x00100000
#define SDHCI_HOST_CONTROL 0x28
#define SDHCI_CTRL_LED 0x01
@ -281,6 +282,9 @@ struct sdhci_ops {
unsigned int (*get_max_clock)(struct sdhci_host *host);
unsigned int (*get_min_clock)(struct sdhci_host *host);
unsigned int (*get_timeout_clock)(struct sdhci_host *host);
unsigned int (*get_max_timeout_count)(struct sdhci_host *host);
void (*set_timeout)(struct sdhci_host *host,
struct mmc_command *cmd);
void (*set_bus_width)(struct sdhci_host *host, int width);
void (*platform_send_init_74_clocks)(struct sdhci_host *host,
u8 power_mode);


@ -1553,7 +1553,6 @@ static struct platform_driver sh_mmcif_driver = {
.driver = {
.name = DRIVER_NAME,
.pm = &sh_mmcif_dev_pm_ops,
.owner = THIS_MODULE,
.of_match_table = mmcif_of_match,
},
};


@ -39,6 +39,7 @@ struct sh_mobile_sdhi_of_data {
unsigned long tmio_flags;
unsigned long capabilities;
unsigned long capabilities2;
dma_addr_t dma_rx_offset;
};
static const struct sh_mobile_sdhi_of_data sh_mobile_sdhi_of_cfg[] = {
@ -48,14 +49,16 @@ static const struct sh_mobile_sdhi_of_data sh_mobile_sdhi_of_cfg[] = {
};
static const struct sh_mobile_sdhi_of_data of_rcar_gen1_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
TMIO_MMC_CLK_ACTUAL,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
};
static const struct sh_mobile_sdhi_of_data of_rcar_gen2_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE,
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
TMIO_MMC_CLK_ACTUAL,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
.capabilities2 = MMC_CAP2_NO_MULTI_READ,
.dma_rx_offset = 0x2000,
};
static const struct of_device_id sh_mobile_sdhi_of_match[] = {
@ -68,6 +71,9 @@ static const struct of_device_id sh_mobile_sdhi_of_match[] = {
{ .compatible = "renesas,sdhi-r8a7779", .data = &of_rcar_gen1_compatible, },
{ .compatible = "renesas,sdhi-r8a7790", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7791", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7792", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7793", .data = &of_rcar_gen2_compatible, },
{ .compatible = "renesas,sdhi-r8a7794", .data = &of_rcar_gen2_compatible, },
{},
};
MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
@ -132,6 +138,24 @@ static int sh_mobile_sdhi_write16_hook(struct tmio_mmc_host *host, int addr)
return 0;
}
static int sh_mobile_sdhi_multi_io_quirk(struct mmc_card *card,
unsigned int direction, int blk_size)
{
/*
* In Renesas controllers, when performing a
* multiple block read of one or two blocks,
* depending on the timing with which the
* response register is read, the response
* value may not be read properly.
* Use a single block read to work around this HW bug
*/
if ((direction == MMC_DATA_READ) &&
blk_size == 2)
return 1;
return blk_size;
}
static void sh_mobile_sdhi_cd_wakeup(const struct platform_device *pdev)
{
mmc_detect_change(platform_get_drvdata(pdev), msecs_to_jiffies(100));
@ -187,6 +211,7 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
mmc_data->clk_disable = sh_mobile_sdhi_clk_disable;
mmc_data->capabilities = MMC_CAP_MMC_HIGHSPEED;
mmc_data->write16_hook = sh_mobile_sdhi_write16_hook;
mmc_data->multi_io_quirk = sh_mobile_sdhi_multi_io_quirk;
if (p) {
mmc_data->flags = p->tmio_flags;
mmc_data->ocr_mask = p->tmio_ocr_mask;
@ -223,11 +248,27 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
*/
mmc_data->flags |= TMIO_MMC_SDIO_IRQ;
/*
* All SDHI have the CMD12 control bit
*/
mmc_data->flags |= TMIO_MMC_HAVE_CMD12_CTRL;
/*
* All SDHI need SDIO_INFO1 reserved bit
*/
mmc_data->flags |= TMIO_MMC_SDIO_STATUS_QUIRK;
/*
* All SDHI have DMA control register
*/
mmc_data->flags |= TMIO_MMC_HAVE_CTL_DMA_REG;
if (of_id && of_id->data) {
const struct sh_mobile_sdhi_of_data *of_data = of_id->data;
mmc_data->flags |= of_data->tmio_flags;
mmc_data->capabilities |= of_data->capabilities;
mmc_data->capabilities2 |= of_data->capabilities2;
dma_priv->dma_rx_offset = of_data->dma_rx_offset;
}
/* SD control register space size is 0x100, 0x200 for bus_shift=1 */
@ -332,8 +373,9 @@ static int sh_mobile_sdhi_remove(struct platform_device *pdev)
}
static const struct dev_pm_ops tmio_mmc_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(tmio_mmc_host_suspend, tmio_mmc_host_resume)
SET_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend,
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
pm_runtime_force_resume)
SET_PM_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend,
tmio_mmc_host_runtime_resume,
NULL)
};
@ -341,7 +383,6 @@ static const struct dev_pm_ops tmio_mmc_dev_pm_ops = {
static struct platform_driver sh_mobile_sdhi_driver = {
.driver = {
.name = "sh_mobile_sdhi",
.owner = THIS_MODULE,
.pm = &tmio_mmc_dev_pm_ops,
.of_match_table = sh_mobile_sdhi_of_match,
},


@ -990,7 +990,8 @@ static int sunxi_mmc_probe(struct platform_device *pdev)
/* 400kHz ~ 50MHz */
mmc->f_min = 400000;
mmc->f_max = 50000000;
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED;
mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED |
MMC_CAP_ERASE;
ret = mmc_of_parse(mmc);
if (ret)
@ -1035,7 +1036,6 @@ static int sunxi_mmc_remove(struct platform_device *pdev)
static struct platform_driver sunxi_mmc_driver = {
.driver = {
.name = "sunxi-mmc",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(sunxi_mmc_of_match),
},
.probe = sunxi_mmc_probe,


@ -952,8 +952,8 @@ static int tifm_sd_probe(struct tifm_dev *sock)
if (!(TIFM_SOCK_STATE_OCCUPIED
& readl(sock->addr + SOCK_PRESENT_STATE))) {
pr_warning("%s : card gone, unexpectedly\n",
dev_name(&sock->dev));
pr_warn("%s : card gone, unexpectedly\n",
dev_name(&sock->dev));
return rc;
}


@ -30,7 +30,7 @@ static int tmio_mmc_suspend(struct device *dev)
const struct mfd_cell *cell = mfd_get_cell(pdev);
int ret;
ret = tmio_mmc_host_suspend(dev);
ret = pm_runtime_force_suspend(dev);
/* Tell MFD core it can disable us now.*/
if (!ret && cell->disable)
@ -50,7 +50,7 @@ static int tmio_mmc_resume(struct device *dev)
ret = cell->resume(pdev);
if (!ret)
ret = tmio_mmc_host_resume(dev);
ret = pm_runtime_force_resume(dev);
return ret;
}
@ -135,6 +135,9 @@ static int tmio_mmc_remove(struct platform_device *pdev)
static const struct dev_pm_ops tmio_mmc_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(tmio_mmc_suspend, tmio_mmc_resume)
SET_PM_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend,
tmio_mmc_host_runtime_resume,
NULL)
};
static struct platform_driver tmio_mmc_driver = {


@ -40,22 +40,6 @@
struct tmio_mmc_data;
/*
* We differentiate between the following 3 power states:
* 1. card slot powered off, controller stopped. This is used, when either there
* is no card in the slot, or the card really has to be powered down.
* 2. card slot powered on, controller stopped. This is used, when a card is in
* the slot, but no activity is currently taking place. This is a power-
* saving mode with card-state preserved. This state can be entered, e.g.
* when MMC clock-gating is used.
* 3. card slot powered on, controller running. This is the actual active state.
*/
enum tmio_mmc_power {
TMIO_MMC_OFF_STOP, /* card power off, controller stopped */
TMIO_MMC_ON_STOP, /* card power on, controller stopped */
TMIO_MMC_ON_RUN, /* card power on, controller running */
};
struct tmio_mmc_host {
void __iomem *ctl;
struct mmc_command *cmd;
@ -63,9 +47,6 @@ struct tmio_mmc_host {
struct mmc_data *data;
struct mmc_host *mmc;
/* Controller and card power state */
enum tmio_mmc_power power;
/* Callbacks for clock / power control */
void (*set_pwr)(struct platform_device *host, int state);
void (*set_clk_div)(struct platform_device *host, int state);
@ -92,15 +73,16 @@ struct tmio_mmc_host {
struct delayed_work delayed_reset_work;
struct work_struct done;
/* Cache IRQ mask */
/* Cache */
u32 sdcard_irq_mask;
u32 sdio_irq_mask;
unsigned int clk_cache;
spinlock_t lock; /* protect host private data */
unsigned long last_req_ts;
struct mutex ios_lock; /* protect set_ios() context */
bool native_hotplug;
bool resuming;
bool sdio_irq_enabled;
};
int tmio_mmc_host_probe(struct tmio_mmc_host **host,
@ -162,12 +144,7 @@ static inline void tmio_mmc_abort_dma(struct tmio_mmc_host *host)
}
#endif
#ifdef CONFIG_PM_SLEEP
int tmio_mmc_host_suspend(struct device *dev);
int tmio_mmc_host_resume(struct device *dev);
#endif
#ifdef CONFIG_PM_RUNTIME
#ifdef CONFIG_PM
int tmio_mmc_host_runtime_suspend(struct device *dev);
int tmio_mmc_host_runtime_resume(struct device *dev);
#endif


@ -28,10 +28,8 @@ void tmio_mmc_enable_dma(struct tmio_mmc_host *host, bool enable)
if (!host->chan_tx || !host->chan_rx)
return;
#if defined(CONFIG_SUPERH) || defined(CONFIG_ARCH_SHMOBILE)
/* Switch DMA mode on or off - SuperH specific? */
sd_ctrl_write16(host, CTL_DMA_ENABLE, enable ? 2 : 0);
#endif
if (host->pdata->flags & TMIO_MMC_HAVE_CTL_DMA_REG)
sd_ctrl_write16(host, CTL_DMA_ENABLE, enable ? 2 : 0);
}
void tmio_mmc_abort_dma(struct tmio_mmc_host *host)
@ -312,7 +310,7 @@ void tmio_mmc_request_dma(struct tmio_mmc_host *host, struct tmio_mmc_data *pdat
if (pdata->dma->chan_priv_rx)
cfg.slave_id = pdata->dma->slave_id_rx;
cfg.direction = DMA_DEV_TO_MEM;
cfg.src_addr = cfg.dst_addr;
cfg.src_addr = cfg.dst_addr + pdata->dma->dma_rx_offset;
cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
cfg.dst_addr = 0;
ret = dmaengine_slave_config(host->chan_rx, &cfg);


@ -44,6 +44,7 @@
#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/regulator/consumer.h>
#include <linux/mmc/sdio.h>
#include <linux/scatterlist.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
@ -129,19 +130,28 @@ static void tmio_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
{
struct tmio_mmc_host *host = mmc_priv(mmc);
if (enable) {
if (enable && !host->sdio_irq_enabled) {
/* Keep device active while SDIO irq is enabled */
pm_runtime_get_sync(mmc_dev(mmc));
host->sdio_irq_enabled = true;
host->sdio_irq_mask = TMIO_SDIO_MASK_ALL &
~TMIO_SDIO_STAT_IOIRQ;
sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0001);
sd_ctrl_write16(host, CTL_SDIO_IRQ_MASK, host->sdio_irq_mask);
} else {
} else if (!enable && host->sdio_irq_enabled) {
host->sdio_irq_mask = TMIO_SDIO_MASK_ALL;
sd_ctrl_write16(host, CTL_SDIO_IRQ_MASK, host->sdio_irq_mask);
sd_ctrl_write16(host, CTL_TRANSACTION_CTL, 0x0000);
host->sdio_irq_enabled = false;
pm_runtime_mark_last_busy(mmc_dev(mmc));
pm_runtime_put_autosuspend(mmc_dev(mmc));
}
}
static void tmio_mmc_set_clock(struct tmio_mmc_host *host, int new_clock)
static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
unsigned int new_clock)
{
u32 clk = 0, clock;
@ -149,7 +159,11 @@ static void tmio_mmc_set_clock(struct tmio_mmc_host *host, int new_clock)
for (clock = host->mmc->f_min, clk = 0x80000080;
new_clock >= (clock<<1); clk >>= 1)
clock <<= 1;
clk |= 0x100;
/* 1/1 clock is an option */
if ((host->pdata->flags & TMIO_MMC_CLK_ACTUAL) &&
((clk >> 22) & 0x1))
clk |= 0xff;
}
if (host->set_clk_div)
@ -245,6 +259,9 @@ static void tmio_mmc_reset_work(struct work_struct *work)
tmio_mmc_abort_dma(host);
mmc_request_done(host->mmc, mrq);
pm_runtime_mark_last_busy(mmc_dev(host->mmc));
pm_runtime_put_autosuspend(mmc_dev(host->mmc));
}
/* called with host->lock held, interrupts disabled */
@ -274,6 +291,9 @@ static void tmio_mmc_finish_request(struct tmio_mmc_host *host)
tmio_mmc_abort_dma(host);
mmc_request_done(host->mmc, mrq);
pm_runtime_mark_last_busy(mmc_dev(host->mmc));
pm_runtime_put_autosuspend(mmc_dev(host->mmc));
}
static void tmio_mmc_done_work(struct work_struct *work)
@ -295,6 +315,7 @@ static void tmio_mmc_done_work(struct work_struct *work)
#define TRANSFER_READ 0x1000
#define TRANSFER_MULTI 0x2000
#define SECURITY_CMD 0x4000
#define NO_CMD12_ISSUE 0x4000 /* TMIO_MMC_HAVE_CMD12_CTRL */
static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command *cmd)
{
@ -331,6 +352,14 @@ static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command
if (data->blocks > 1) {
sd_ctrl_write16(host, CTL_STOP_INTERNAL_ACTION, 0x100);
c |= TRANSFER_MULTI;
/*
* Disable auto CMD12 at IO_RW_EXTENDED when
* multiple block transfer
*/
if ((host->pdata->flags & TMIO_MMC_HAVE_CMD12_CTRL) &&
(cmd->opcode == SD_IO_RW_EXTENDED))
c |= NO_CMD12_ISSUE;
}
if (data->flags & MMC_DATA_READ)
c |= TRANSFER_READ;
@ -347,6 +376,40 @@ static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command
return 0;
}
static void tmio_mmc_transfer_data(struct tmio_mmc_host *host,
unsigned short *buf,
unsigned int count)
{
int is_read = host->data->flags & MMC_DATA_READ;
u8 *buf8;
/*
* Transfer the data
*/
if (is_read)
sd_ctrl_read16_rep(host, CTL_SD_DATA_PORT, buf, count >> 1);
else
sd_ctrl_write16_rep(host, CTL_SD_DATA_PORT, buf, count >> 1);
/* if count was even number */
if (!(count & 0x1))
return;
/* if count was odd number */
buf8 = (u8 *)(buf + (count >> 1));
/*
* FIXME
*
* the driver and this function assume
* little-endian byte order
*/
if (is_read)
*buf8 = sd_ctrl_read16(host, CTL_SD_DATA_PORT) & 0xff;
else
sd_ctrl_write16(host, CTL_SD_DATA_PORT, *buf8);
}
/*
* This chip always returns (at least?) as much data as you ask for.
* I'm unsure what happens if you ask for less than a block. This should be
@ -379,10 +442,7 @@ static void tmio_mmc_pio_irq(struct tmio_mmc_host *host)
count, host->sg_off, data->flags);
/* Transfer the data */
if (data->flags & MMC_DATA_READ)
sd_ctrl_read16_rep(host, CTL_SD_DATA_PORT, buf, count >> 1);
else
sd_ctrl_write16_rep(host, CTL_SD_DATA_PORT, buf, count >> 1);
tmio_mmc_transfer_data(host, buf, count);
host->sg_off += count;
@ -465,6 +525,9 @@ static void tmio_mmc_data_irq(struct tmio_mmc_host *host)
goto out;
if (host->chan_tx && (data->flags & MMC_DATA_WRITE) && !host->force_pio) {
u32 status = sd_ctrl_read32(host, CTL_STATUS);
bool done = false;
/*
* Has all data been written out yet? Testing on SuperH showed,
* that in most cases the first interrupt comes already with the
@ -473,7 +536,15 @@ static void tmio_mmc_data_irq(struct tmio_mmc_host *host)
* DATAEND interrupt with the BUSY bit set, in this cases
* waiting for one more interrupt fixes the problem.
*/
if (!(sd_ctrl_read32(host, CTL_STATUS) & TMIO_STAT_CMD_BUSY)) {
if (host->pdata->flags & TMIO_MMC_HAS_IDLE_WAIT) {
if (status & TMIO_STAT_ILL_FUNC)
done = true;
} else {
if (!(status & TMIO_STAT_CMD_BUSY))
done = true;
}
if (done) {
tmio_mmc_disable_mmc_irqs(host, TMIO_STAT_DATAEND);
tasklet_schedule(&host->dma_complete);
}
@ -557,6 +628,9 @@ static void tmio_mmc_card_irq_status(struct tmio_mmc_host *host,
pr_debug_status(*status);
pr_debug_status(*ireg);
/* Clear the status except the interrupt status */
sd_ctrl_write32(host, CTL_STATUS, TMIO_MASK_IRQ);
}
static bool __tmio_mmc_card_detect_irq(struct tmio_mmc_host *host,
@ -637,6 +711,7 @@ irqreturn_t tmio_mmc_sdio_irq(int irq, void *devid)
struct mmc_host *mmc = host->mmc;
struct tmio_mmc_data *pdata = host->pdata;
unsigned int ireg, status;
unsigned int sdio_status;
if (!(pdata->flags & TMIO_MMC_SDIO_IRQ))
return IRQ_HANDLED;
@ -644,7 +719,11 @@ irqreturn_t tmio_mmc_sdio_irq(int irq, void *devid)
status = sd_ctrl_read16(host, CTL_SDIO_STATUS);
ireg = status & TMIO_SDIO_MASK_ALL & ~host->sdcard_irq_mask;
sd_ctrl_write16(host, CTL_SDIO_STATUS, status & ~TMIO_SDIO_MASK_ALL);
sdio_status = status & ~TMIO_SDIO_MASK_ALL;
if (pdata->flags & TMIO_MMC_SDIO_STATUS_QUIRK)
sdio_status |= 6;
sd_ctrl_write16(host, CTL_SDIO_STATUS, sdio_status);
if (mmc->caps & MMC_CAP_SDIO_IRQ && ireg & TMIO_SDIO_STAT_IOIRQ)
mmc_signal_sdio_irq(mmc);
@ -728,6 +807,8 @@ static void tmio_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
spin_unlock_irqrestore(&host->lock, flags);
pm_runtime_get_sync(mmc_dev(mmc));
if (mrq->data) {
ret = tmio_mmc_start_data(host, mrq->data);
if (ret)
@ -746,11 +827,14 @@ static void tmio_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
host->mrq = NULL;
mrq->cmd->error = ret;
mmc_request_done(mmc, mrq);
pm_runtime_mark_last_busy(mmc_dev(mmc));
pm_runtime_put_autosuspend(mmc_dev(mmc));
}
static int tmio_mmc_clk_update(struct mmc_host *mmc)
static int tmio_mmc_clk_update(struct tmio_mmc_host *host)
{
struct tmio_mmc_host *host = mmc_priv(mmc);
struct mmc_host *mmc = host->mmc;
struct tmio_mmc_data *pdata = host->pdata;
int ret;
@ -812,6 +896,19 @@ static void tmio_mmc_power_off(struct tmio_mmc_host *host)
host->set_pwr(host->pdev, 0);
}
static void tmio_mmc_set_bus_width(struct tmio_mmc_host *host,
unsigned char bus_width)
{
switch (bus_width) {
case MMC_BUS_WIDTH_1:
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x80e0);
break;
case MMC_BUS_WIDTH_4:
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x00e0);
break;
}
}
/* Set MMC clock / power.
* Note: This controller uses a simple divider scheme therefore it cannot
* run a MMC card at full speed (20MHz). The max clock is 24MHz on SD, but as
@ -824,6 +921,8 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
struct device *dev = &host->pdev->dev;
unsigned long flags;
pm_runtime_get_sync(mmc_dev(mmc));
mutex_lock(&host->ios_lock);
spin_lock_irqsave(&host->lock, flags);
@ -850,60 +949,22 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
spin_unlock_irqrestore(&host->lock, flags);
/*
* host->power toggles between false and true in both cases - either
* or not the controller can be runtime-suspended during inactivity.
* But if the controller has to be kept on, the runtime-pm usage_count
* is kept positive, so no suspending actually takes place.
*/
if (ios->power_mode == MMC_POWER_ON && ios->clock) {
if (host->power != TMIO_MMC_ON_RUN) {
tmio_mmc_clk_update(mmc);
pm_runtime_get_sync(dev);
if (host->resuming) {
tmio_mmc_reset(host);
host->resuming = false;
}
}
if (host->power == TMIO_MMC_OFF_STOP)
tmio_mmc_reset(host);
switch (ios->power_mode) {
case MMC_POWER_OFF:
tmio_mmc_power_off(host);
tmio_mmc_clk_stop(host);
break;
case MMC_POWER_UP:
tmio_mmc_set_clock(host, ios->clock);
if (host->power == TMIO_MMC_OFF_STOP)
/* power up SD card and the bus */
tmio_mmc_power_on(host, ios->vdd);
host->power = TMIO_MMC_ON_RUN;
/* start bus clock */
tmio_mmc_power_on(host, ios->vdd);
tmio_mmc_clk_start(host);
} else if (ios->power_mode != MMC_POWER_UP) {
struct tmio_mmc_data *pdata = host->pdata;
unsigned int old_power = host->power;
if (old_power != TMIO_MMC_OFF_STOP) {
if (ios->power_mode == MMC_POWER_OFF) {
tmio_mmc_power_off(host);
host->power = TMIO_MMC_OFF_STOP;
} else {
host->power = TMIO_MMC_ON_STOP;
}
}
if (old_power == TMIO_MMC_ON_RUN) {
tmio_mmc_clk_stop(host);
pm_runtime_put(dev);
if (pdata->clk_disable)
pdata->clk_disable(host->pdev);
}
}
if (host->power != TMIO_MMC_OFF_STOP) {
switch (ios->bus_width) {
case MMC_BUS_WIDTH_1:
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x80e0);
tmio_mmc_set_bus_width(host, ios->bus_width);
break;
case MMC_BUS_WIDTH_4:
sd_ctrl_write16(host, CTL_SD_MEM_CARD_OPT, 0x00e0);
case MMC_POWER_ON:
tmio_mmc_set_clock(host, ios->clock);
tmio_mmc_clk_start(host);
tmio_mmc_set_bus_width(host, ios->bus_width);
break;
}
}
/* Let things settle. delay taken from winCE driver */
@ -915,7 +976,12 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
ios->clock, ios->power_mode);
host->mrq = NULL;
host->clk_cache = ios->clock;
mutex_unlock(&host->ios_lock);
pm_runtime_mark_last_busy(mmc_dev(mmc));
pm_runtime_put_autosuspend(mmc_dev(mmc));
}
static int tmio_mmc_get_ro(struct mmc_host *mmc)
@ -926,8 +992,25 @@ static int tmio_mmc_get_ro(struct mmc_host *mmc)
if (ret >= 0)
return ret;
return !((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) ||
(sd_ctrl_read32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT));
pm_runtime_get_sync(mmc_dev(mmc));
ret = !((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) ||
(sd_ctrl_read32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT));
pm_runtime_mark_last_busy(mmc_dev(mmc));
pm_runtime_put_autosuspend(mmc_dev(mmc));
return ret;
}
static int tmio_multi_io_quirk(struct mmc_card *card,
unsigned int direction, int blk_size)
{
struct tmio_mmc_host *host = mmc_priv(card->host);
struct tmio_mmc_data *pdata = host->pdata;
if (pdata->multi_io_quirk)
return pdata->multi_io_quirk(card, direction, blk_size);
return blk_size;
}
static const struct mmc_host_ops tmio_mmc_ops = {
@ -936,6 +1019,7 @@ static const struct mmc_host_ops tmio_mmc_ops = {
.get_ro = tmio_mmc_get_ro,
.get_cd = mmc_gpio_get_cd,
.enable_sdio_irq = tmio_mmc_enable_sdio_irq,
.multi_io_quirk = tmio_multi_io_quirk,
};
static int tmio_mmc_init_ocr(struct tmio_mmc_host *host)
@ -1032,28 +1116,23 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
mmc->caps & MMC_CAP_NONREMOVABLE ||
mmc->slot.cd_irq >= 0);
_host->power = TMIO_MMC_OFF_STOP;
pm_runtime_enable(&pdev->dev);
ret = pm_runtime_resume(&pdev->dev);
if (ret < 0)
goto pm_disable;
if (tmio_mmc_clk_update(mmc) < 0) {
if (tmio_mmc_clk_update(_host) < 0) {
mmc->f_max = pdata->hclk;
mmc->f_min = mmc->f_max / 512;
}
/*
* There are 4 different scenarios for the card detection:
* 1) an external gpio irq handles the cd (best for power savings)
* 2) internal sdhi irq handles the cd
* 3) a worker thread polls the sdhi - indicated by MMC_CAP_NEEDS_POLL
* 4) the medium is non-removable - indicated by MMC_CAP_NONREMOVABLE
*
* While we increment the runtime PM counter for all scenarios when
* the mmc core activates us by calling an appropriate set_ios(), we
* must additionally ensure that in case 2) the tmio mmc hardware stays
* powered on during runtime for the card detection to work.
* Check the sanity of mmc->f_min to prevent tmio_mmc_set_clock() from
* looping forever...
*/
if (mmc->f_min == 0) {
ret = -EINVAL;
goto host_free;
}
/*
* While using internal tmio hardware logic for card detection, we need
* to ensure it stays powered for it to work.
*/
if (_host->native_hotplug)
pm_runtime_get_noresume(&pdev->dev);
@ -1074,8 +1153,12 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
_host->sdcard_irq_mask &= ~irq_mask;
if (pdata->flags & TMIO_MMC_SDIO_IRQ)
tmio_mmc_enable_sdio_irq(mmc, 0);
_host->sdio_irq_enabled = false;
if (pdata->flags & TMIO_MMC_SDIO_IRQ) {
_host->sdio_irq_mask = TMIO_SDIO_MASK_ALL;
sd_ctrl_write16(_host, CTL_SDIO_IRQ_MASK, _host->sdio_irq_mask);
sd_ctrl_write16(_host, CTL_TRANSACTION_CTL, 0x0000);
}
spin_lock_init(&_host->lock);
mutex_init(&_host->ios_lock);
@ -1087,9 +1170,12 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
/* See if we also get DMA */
tmio_mmc_request_dma(_host, pdata);
pm_runtime_set_active(&pdev->dev);
pm_runtime_set_autosuspend_delay(&pdev->dev, 50);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_enable(&pdev->dev);
ret = mmc_add_host(mmc);
if (pdata->clk_disable)
pdata->clk_disable(pdev);
if (ret < 0) {
tmio_mmc_host_remove(_host);
return ret;
@ -1103,15 +1189,13 @@ int tmio_mmc_host_probe(struct tmio_mmc_host **host,
tmio_mmc_host_remove(_host);
return ret;
}
mmc_gpiod_request_cd_irq(mmc);
}
*host = _host;
return 0;
pm_disable:
pm_runtime_disable(&pdev->dev);
iounmap(_host->ctl);
host_free:
mmc_free_host(mmc);
@ -1142,34 +1226,20 @@ void tmio_mmc_host_remove(struct tmio_mmc_host *host)
}
EXPORT_SYMBOL(tmio_mmc_host_remove);
#ifdef CONFIG_PM_SLEEP
int tmio_mmc_host_suspend(struct device *dev)
#ifdef CONFIG_PM
int tmio_mmc_host_runtime_suspend(struct device *dev)
{
struct mmc_host *mmc = dev_get_drvdata(dev);
struct tmio_mmc_host *host = mmc_priv(mmc);
tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
return 0;
}
EXPORT_SYMBOL(tmio_mmc_host_suspend);
int tmio_mmc_host_resume(struct device *dev)
{
struct mmc_host *mmc = dev_get_drvdata(dev);
struct tmio_mmc_host *host = mmc_priv(mmc);
if (host->clk_cache)
tmio_mmc_clk_stop(host);
tmio_mmc_enable_dma(host, true);
if (host->pdata->clk_disable)
host->pdata->clk_disable(host->pdev);
/* The MMC core will perform the complete set up */
host->resuming = true;
return 0;
}
EXPORT_SYMBOL(tmio_mmc_host_resume);
#endif
#ifdef CONFIG_PM_RUNTIME
int tmio_mmc_host_runtime_suspend(struct device *dev)
{
return 0;
}
EXPORT_SYMBOL(tmio_mmc_host_runtime_suspend);
@ -1179,6 +1249,14 @@ int tmio_mmc_host_runtime_resume(struct device *dev)
struct mmc_host *mmc = dev_get_drvdata(dev);
struct tmio_mmc_host *host = mmc_priv(mmc);
tmio_mmc_reset(host);
tmio_mmc_clk_update(host);
if (host->clk_cache) {
tmio_mmc_set_clock(host, host->clk_cache);
tmio_mmc_clk_start(host);
}
tmio_mmc_enable_dma(host, true);
return 0;


@ -803,8 +803,7 @@ static void wbsd_request(struct mmc_host *mmc, struct mmc_request *mrq)
default:
#ifdef CONFIG_MMC_DEBUG
pr_warning("%s: Data command %d is not "
"supported by this controller.\n",
pr_warn("%s: Data command %d is not supported by this controller\n",
mmc_hostname(host->mmc), cmd->opcode);
#endif
cmd->error = -EINVAL;
@ -1429,8 +1428,8 @@ static void wbsd_request_dma(struct wbsd_host *host, int dma)
free_dma(dma);
err:
pr_warning(DRIVER_NAME ": Unable to allocate DMA %d. "
"Falling back on FIFO.\n", dma);
pr_warn(DRIVER_NAME ": Unable to allocate DMA %d - falling back on FIFO\n",
dma);
}
static void wbsd_release_dma(struct wbsd_host *host)
@ -1664,9 +1663,7 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
ret = wbsd_scan(host);
if (ret) {
if (pnp && (ret == -ENODEV)) {
pr_warning(DRIVER_NAME
": Unable to confirm device presence. You may "
"experience lock-ups.\n");
pr_warn(DRIVER_NAME ": Unable to confirm device presence - you may experience lock-ups\n");
} else {
wbsd_free_mmc(dev);
return ret;
@ -1688,10 +1685,7 @@ static int wbsd_init(struct device *dev, int base, int irq, int dma,
*/
if (pnp) {
if ((host->config != 0) && !wbsd_chip_validate(host)) {
pr_warning(DRIVER_NAME
": PnP active but chip not configured! "
"You probably have a buggy BIOS. "
"Configuring chip manually.\n");
pr_warn(DRIVER_NAME ": PnP active but chip not configured! You probably have a buggy BIOS. Configuring chip manually.\n");
wbsd_chip_config(host);
}
} else
@ -1884,10 +1878,7 @@ static int wbsd_pnp_resume(struct pnp_dev *pnp_dev)
*/
if (host->config != 0) {
if (!wbsd_chip_validate(host)) {
pr_warning(DRIVER_NAME
": PnP active but chip not configured! "
"You probably have a buggy BIOS. "
"Configuring chip manually.\n");
pr_warn(DRIVER_NAME ": PnP active but chip not configured! You probably have a buggy BIOS. Configuring chip manually.\n");
wbsd_chip_config(host);
}
}


@ -1,6 +1,8 @@
#ifndef __LINUX_ATMEL_MCI_H
#define __LINUX_ATMEL_MCI_H
#include <linux/types.h>
#define ATMCI_MAX_NR_SLOTS 2
/**


@ -5,6 +5,7 @@
#include <linux/fb.h>
#include <linux/io.h>
#include <linux/jiffies.h>
#include <linux/mmc/card.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
@ -83,6 +84,27 @@
*/
#define TMIO_MMC_HAVE_HIGH_REG (1 << 6)
/*
* Some controllers have a register that controls
* automatic CMD12 (stop command) issuing
*/
#define TMIO_MMC_HAVE_CMD12_CTRL (1 << 7)
/*
* Some controllers need the SDIO status reserved bits set to 1
*/
#define TMIO_MMC_SDIO_STATUS_QUIRK (1 << 8)
/*
* Some controllers have DMA enable/disable register
*/
#define TMIO_MMC_HAVE_CTL_DMA_REG (1 << 9)
/*
* Some controllers allow setting the actual SDx clock
*/
#define TMIO_MMC_CLK_ACTUAL (1 << 10)
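
As a usage illustration, a board file or glue driver could request the new behaviour statically through its tmio_mmc_data; the foo_* name is an assumption, and the SDHI glue earlier in this diff selects the same flags dynamically at probe time:

    #include <linux/mfd/tmio.h>
    #include <linux/mmc/host.h>

    static struct tmio_mmc_data foo_mmc_data = {
    	.flags		= TMIO_MMC_HAVE_CMD12_CTRL |
    			  TMIO_MMC_SDIO_STATUS_QUIRK |
    			  TMIO_MMC_HAVE_CTL_DMA_REG |
    			  TMIO_MMC_CLK_ACTUAL,
    	.capabilities	= MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
    };
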
int tmio_core_mmc_enable(void __iomem *cnf, int shift, unsigned long base);
int tmio_core_mmc_resume(void __iomem *cnf, int shift, unsigned long base);
void tmio_core_mmc_pwr(void __iomem *cnf, int shift, int state);
@ -96,6 +118,7 @@ struct tmio_mmc_dma {
int slave_id_tx;
int slave_id_rx;
int alignment_shift;
dma_addr_t dma_rx_offset;
bool (*filter)(struct dma_chan *chan, void *arg);
};
@ -120,6 +143,8 @@ struct tmio_mmc_data {
/* clock management callbacks */
int (*clk_enable)(struct platform_device *pdev, unsigned int *f);
void (*clk_disable)(struct platform_device *pdev);
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
};
/*


@ -42,7 +42,8 @@ struct mmc_csd {
unsigned int read_partial:1,
read_misalign:1,
write_partial:1,
write_misalign:1;
write_misalign:1,
dsr_imp:1;
};
struct mmc_ext_csd {
@ -74,7 +75,7 @@ struct mmc_ext_csd {
unsigned int sec_trim_mult; /* Secure trim multiplier */
unsigned int sec_erase_mult; /* Secure erase multiplier */
unsigned int trim_timeout; /* In milliseconds */
bool enhanced_area_en; /* enable bit */
bool partition_setting_completed; /* enable bit */
unsigned long long enhanced_area_offset; /* Units: Byte */
unsigned int enhanced_area_size; /* Units: KB */
unsigned int cache_size; /* Units: KB */
@ -214,11 +215,12 @@ enum mmc_blk_status {
};
/* The number of MMC physical partitions. These consist of:
* boot partitions (2), general purpose partitions (4) in MMC v4.4.
* boot partitions (2), general purpose partitions (4) and
* RPMB partition (1) in MMC v4.4.
*/
#define MMC_NUM_BOOT_PARTITION 2
#define MMC_NUM_GP_PARTITION 4
#define MMC_NUM_PHY_PARTITION 6
#define MMC_NUM_PHY_PARTITION 7
#define MAX_MMC_PART_NAME_LEN 20
/*


@ -26,6 +26,8 @@ enum dw_mci_state {
STATE_DATA_BUSY,
STATE_SENDING_STOP,
STATE_DATA_ERROR,
STATE_SENDING_CMD11,
STATE_WAITING_CMD11_DONE,
};
enum {
@ -188,7 +190,7 @@ struct dw_mci {
/* Workaround flags */
u32 quirks;
struct regulator *vmmc; /* Power regulator */
bool vqmmc_enabled;
unsigned long irq_flags; /* IRQ flags */
int irq;
};


@ -42,6 +42,7 @@ struct mmc_ios {
#define MMC_POWER_OFF 0
#define MMC_POWER_UP 1
#define MMC_POWER_ON 2
#define MMC_POWER_UNDEFINED 3
unsigned char bus_width; /* data bus width */
@ -139,6 +140,13 @@ struct mmc_host_ops {
int (*select_drive_strength)(unsigned int max_dtr, int host_drv, int card_drv);
void (*hw_reset)(struct mmc_host *host);
void (*card_event)(struct mmc_host *host);
/*
* Optional callback to support controllers with HW issues for multiple
* I/O. Returns the number of supported blocks for the request.
*/
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
};
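
With the ->multi_io_quirk() callback declared above, the core no longer needs the blanket MMC_CAP2_NO_MULTI_READ capability and can instead ask the host how many blocks a particular request may carry. A sketch of that consultation (the mmc_fixup_blk_count() wrapper is illustrative, not an API introduced by this series):

    #include <linux/mmc/card.h>
    #include <linux/mmc/host.h>

    static int mmc_fixup_blk_count(struct mmc_card *card,
    			       unsigned int direction, int blk_size)
    {
    	struct mmc_host *host = card->host;

    	/* Hosts without the quirk handle any block count as-is. */
    	if (host->ops->multi_io_quirk)
    		return host->ops->multi_io_quirk(card, direction, blk_size);

    	return blk_size;
    }
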
struct mmc_card;
@ -265,7 +273,6 @@ struct mmc_host {
#define MMC_CAP2_BOOTPART_NOACC (1 << 0) /* Boot partition no access */
#define MMC_CAP2_FULL_PWR_CYCLE (1 << 2) /* Can do full power cycle */
#define MMC_CAP2_NO_MULTI_READ (1 << 3) /* Multiblock reads don't work */
#define MMC_CAP2_HS200_1_8V_SDR (1 << 5) /* can support */
#define MMC_CAP2_HS200_1_2V_SDR (1 << 6) /* can support */
#define MMC_CAP2_HS200 (MMC_CAP2_HS200_1_8V_SDR | \
@ -365,6 +372,9 @@ struct mmc_host {
unsigned int slotno; /* used for sdio acpi binding */
int dsr_req; /* DSR value is valid */
u32 dsr; /* optional driver stage (DSR) value */
unsigned long private[0] ____cacheline_aligned;
};


@ -53,6 +53,11 @@
#define MMC_SEND_TUNING_BLOCK 19 /* adtc R1 */
#define MMC_SEND_TUNING_BLOCK_HS200 21 /* adtc R1 */
#define MMC_TUNING_BLK_PATTERN_4BIT_SIZE 64
#define MMC_TUNING_BLK_PATTERN_8BIT_SIZE 128
extern const u8 tuning_blk_pattern_4bit[MMC_TUNING_BLK_PATTERN_4BIT_SIZE];
extern const u8 tuning_blk_pattern_8bit[MMC_TUNING_BLK_PATTERN_8BIT_SIZE];
/* class 3 */
#define MMC_WRITE_DAT_UNTIL_STOP 20 /* adtc [31:0] data addr R1 */
@ -281,6 +286,7 @@ struct _mmc_csd {
#define EXT_CSD_EXP_EVENTS_CTRL 56 /* R/W, 2 bytes */
#define EXT_CSD_DATA_SECTOR_SIZE 61 /* R */
#define EXT_CSD_GP_SIZE_MULT 143 /* R/W */
#define EXT_CSD_PARTITION_SETTING_COMPLETED 155 /* R/W */
#define EXT_CSD_PARTITION_ATTRIBUTE 156 /* R/W */
#define EXT_CSD_PARTITION_SUPPORT 160 /* RO */
#define EXT_CSD_HPI_MGMT 161 /* R/W */
@ -349,6 +355,7 @@ struct _mmc_csd {
#define EXT_CSD_PART_CONFIG_ACC_RPMB (0x3)
#define EXT_CSD_PART_CONFIG_ACC_GP0 (0x4)
#define EXT_CSD_PART_SETTING_COMPLETED (0x1)
#define EXT_CSD_PART_SUPPORT_PART_EN (0x1)
#define EXT_CSD_CMD_SET_NORMAL (1<<0)


@ -98,6 +98,8 @@ struct sdhci_host {
#define SDHCI_QUIRK2_BROKEN_HS200 (1<<6)
/* Controller does not support DDR50 */
#define SDHCI_QUIRK2_BROKEN_DDR50 (1<<7)
/* Stop command (CMD12) can set Transfer Complete when not using MMC_RSP_BUSY */
#define SDHCI_QUIRK2_STOP_WITH_TC (1<<8)
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
@ -146,6 +148,7 @@ struct sdhci_host {
struct mmc_command *cmd; /* Current command */
struct mmc_data *data; /* Current data request */
unsigned int data_early:1; /* Data finished before cmd */
unsigned int busy_handle:1; /* Handling the order of Busy-end */
struct sg_mapping_iter sg_miter; /* SG state for PIO */
unsigned int blocks; /* remaining PIO blocks */


@ -24,7 +24,10 @@ void mmc_gpio_free_cd(struct mmc_host *host);
int mmc_gpiod_request_cd(struct mmc_host *host, const char *con_id,
unsigned int idx, bool override_active_level,
unsigned int debounce);
unsigned int debounce, bool *gpio_invert);
int mmc_gpiod_request_ro(struct mmc_host *host, const char *con_id,
unsigned int idx, bool override_active_level,
unsigned int debounce, bool *gpio_invert);
void mmc_gpiod_free_cd(struct mmc_host *host);
void mmc_gpiod_request_cd_irq(struct mmc_host *host);
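
A hedged usage sketch of the updated descriptor-based card-detect helpers whose prototypes change above; the "cd" con_id, index 0, zero debounce and the probe-time placement are assumptions rather than requirements of this series:

    #include <linux/mmc/host.h>
    #include <linux/mmc/slot-gpio.h>

    static int foo_setup_card_detect(struct mmc_host *mmc)
    {
    	bool gpio_invert;
    	int ret;

    	/* Look up the "cd" GPIO descriptor; gpio_invert reports the line polarity. */
    	ret = mmc_gpiod_request_cd(mmc, "cd", 0, false, 0, &gpio_invert);
    	if (ret)
    		return ret;	/* caller may fall back to polling or native CD */

    	/* Typically done once the host has been registered with the core. */
    	mmc_gpiod_request_cd_irq(mmc);

    	return 0;
    }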