/*
 * Copyright © 2008 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Keith Packard <keithp@keithp.com>
 *
 */

#include <linux/export.h>
#include <linux/i2c.h>
#include <linux/notifier.h>
#include <linux/slab.h>
#include <linux/string_helpers.h>
#include <linux/timekeeping.h>
#include <linux/types.h>

#include <asm/byteorder.h>

#include <drm/display/drm_dp_helper.h>
#include <drm/display/drm_dsc_helper.h>
#include <drm/display/drm_hdmi_helper.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_probe_helper.h>

#include "g4x_dp.h"
#include "i915_drv.h"
#include "i915_irq.h"
#include "i915_reg.h"
#include "intel_atomic.h"
#include "intel_audio.h"
#include "intel_backlight.h"
#include "intel_combo_phy_regs.h"
#include "intel_connector.h"
#include "intel_crtc.h"
#include "intel_cx0_phy.h"
#include "intel_ddi.h"
#include "intel_de.h"
#include "intel_display_types.h"
#include "intel_dp.h"
#include "intel_dp_aux.h"
#include "intel_dp_hdcp.h"
#include "intel_dp_link_training.h"
#include "intel_dp_mst.h"
#include "intel_dpio_phy.h"
#include "intel_dpll.h"
#include "intel_fifo_underrun.h"
#include "intel_hdcp.h"
#include "intel_hdmi.h"
#include "intel_hotplug.h"
#include "intel_hotplug_irq.h"
#include "intel_lspcon.h"
#include "intel_lvds.h"
#include "intel_panel.h"
#include "intel_pch_display.h"
#include "intel_pps.h"
#include "intel_psr.h"
#include "intel_tc.h"
#include "intel_vdsc.h"
#include "intel_vrr.h"
#include "intel_crtc_state_dump.h"

/* DP DSC throughput values used for slice count calculations KPixels/s */
#define DP_DSC_PEAK_PIXEL_RATE			2720000
#define DP_DSC_MAX_ENC_THROUGHPUT_0		340000
#define DP_DSC_MAX_ENC_THROUGHPUT_1		400000

/* DP DSC FEC Overhead factor in ppm = 1/(0.972261) = 1.028530 */
#define DP_DSC_FEC_OVERHEAD_FACTOR		1028530

/* Compliance test status bits */
#define INTEL_DP_RESOLUTION_SHIFT_MASK	0
#define INTEL_DP_RESOLUTION_PREFERRED	(1 << INTEL_DP_RESOLUTION_SHIFT_MASK)
#define INTEL_DP_RESOLUTION_STANDARD	(2 << INTEL_DP_RESOLUTION_SHIFT_MASK)
#define INTEL_DP_RESOLUTION_FAILSAFE	(3 << INTEL_DP_RESOLUTION_SHIFT_MASK)

/* Constants for DP DSC configurations */
static const u8 valid_dsc_bpp[] = {6, 8, 10, 12, 15};

/* With Single pipe configuration, HW is capable of supporting maximum
 * of 4 slices per line.
 */
static const u8 valid_dsc_slicecount[] = {1, 2, 4};

/**
 * intel_dp_is_edp - is the given port attached to an eDP panel (either CPU or PCH)
 * @intel_dp: DP struct
 *
 * If a CPU or PCH DP output is attached to an eDP panel, this function
 * will return true, and false otherwise.
 *
 * This function is not safe to use prior to encoder type being set.
 */
bool intel_dp_is_edp(struct intel_dp *intel_dp)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);

	return dig_port->base.type == INTEL_OUTPUT_EDP;
}

static void intel_dp_unset_edid(struct intel_dp *intel_dp);

/* Is link rate UHBR and thus 128b/132b? */
bool intel_dp_is_uhbr(const struct intel_crtc_state *crtc_state)
{
	return drm_dp_is_uhbr_rate(crtc_state->port_clock);
}

/**
 * intel_dp_link_symbol_size - get the link symbol size for a given link rate
 * @rate: link rate in 10kbit/s units
 *
 * Returns the link symbol size in bits/symbol units depending on the link
 * rate -> channel coding.
 */
int intel_dp_link_symbol_size(int rate)
{
	return drm_dp_is_uhbr_rate(rate) ? 32 : 10;
}

/**
 * intel_dp_link_symbol_clock - convert link rate to link symbol clock
 * @rate: link rate in 10kbit/s units
 *
 * Returns the link symbol clock frequency in kHz units depending on the
 * link rate and channel coding.
 */
int intel_dp_link_symbol_clock(int rate)
{
	return DIV_ROUND_CLOSEST(rate * 10, intel_dp_link_symbol_size(rate));
}

static void intel_dp_set_default_sink_rates(struct intel_dp *intel_dp)
{
	intel_dp->sink_rates[0] = 162000;
	intel_dp->num_sink_rates = 1;
}

/* update sink rates from dpcd */
static void intel_dp_set_dpcd_sink_rates(struct intel_dp *intel_dp)
{
	static const int dp_rates[] = {
		162000, 270000, 540000, 810000
	};
	int i, max_rate;
	int max_lttpr_rate;

	if (drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CAN_DO_MAX_LINK_RATE_3_24_GBPS)) {
		/* Needed, e.g., for Apple MBP 2017, 15 inch eDP Retina panel */
		static const int quirk_rates[] = { 162000, 270000, 324000 };

		memcpy(intel_dp->sink_rates, quirk_rates, sizeof(quirk_rates));
		intel_dp->num_sink_rates = ARRAY_SIZE(quirk_rates);

		return;
	}

	/*
	 * Sink rates for 8b/10b.
	 */
	max_rate = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
	max_lttpr_rate = drm_dp_lttpr_max_link_rate(intel_dp->lttpr_common_caps);
	if (max_lttpr_rate)
		max_rate = min(max_rate, max_lttpr_rate);

	for (i = 0; i < ARRAY_SIZE(dp_rates); i++) {
		if (dp_rates[i] > max_rate)
			break;
		intel_dp->sink_rates[i] = dp_rates[i];
	}

	/*
	 * Sink rates for 128b/132b. If set, sink should support all 8b/10b
	 * rates and 10 Gbps.
	 */
	if (intel_dp->dpcd[DP_MAIN_LINK_CHANNEL_CODING] & DP_CAP_ANSI_128B132B) {
		u8 uhbr_rates = 0;

		BUILD_BUG_ON(ARRAY_SIZE(intel_dp->sink_rates) < ARRAY_SIZE(dp_rates) + 3);

		drm_dp_dpcd_readb(&intel_dp->aux,
				  DP_128B132B_SUPPORTED_LINK_RATES, &uhbr_rates);

		if (drm_dp_lttpr_count(intel_dp->lttpr_common_caps)) {
			/* We have a repeater */
			if (intel_dp->lttpr_common_caps[0] >= 0x20 &&
			    intel_dp->lttpr_common_caps[DP_MAIN_LINK_CHANNEL_CODING_PHY_REPEATER -
							DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV] &
			    DP_PHY_REPEATER_128B132B_SUPPORTED) {
				/* Repeater supports 128b/132b, valid UHBR rates */
				uhbr_rates &= intel_dp->lttpr_common_caps[DP_PHY_REPEATER_128B132B_RATES -
									  DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
			} else {
				/* Does not support 128b/132b */
				uhbr_rates = 0;
			}
		}

		if (uhbr_rates & DP_UHBR10)
			intel_dp->sink_rates[i++] = 1000000;
		if (uhbr_rates & DP_UHBR13_5)
			intel_dp->sink_rates[i++] = 1350000;
		if (uhbr_rates & DP_UHBR20)
			intel_dp->sink_rates[i++] = 2000000;
	}

	intel_dp->num_sink_rates = i;
}

static void intel_dp_set_sink_rates(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;
	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
	struct intel_encoder *encoder = &intel_dig_port->base;

	intel_dp_set_dpcd_sink_rates(intel_dp);

	if (intel_dp->num_sink_rates)
		return;

	drm_err(&dp_to_i915(intel_dp)->drm,
		"[CONNECTOR:%d:%s][ENCODER:%d:%s] Invalid DPCD with no link rates, using defaults\n",
		connector->base.base.id, connector->base.name,
		encoder->base.base.id, encoder->base.name);

	intel_dp_set_default_sink_rates(intel_dp);
}

static void intel_dp_set_default_max_sink_lane_count(struct intel_dp *intel_dp)
{
	intel_dp->max_sink_lane_count = 1;
}

static void intel_dp_set_max_sink_lane_count(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;
	struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
	struct intel_encoder *encoder = &intel_dig_port->base;

	intel_dp->max_sink_lane_count = drm_dp_max_lane_count(intel_dp->dpcd);

	switch (intel_dp->max_sink_lane_count) {
	case 1:
	case 2:
	case 4:
		return;
	}

	drm_err(&dp_to_i915(intel_dp)->drm,
		"[CONNECTOR:%d:%s][ENCODER:%d:%s] Invalid DPCD max lane count (%d), using default\n",
		connector->base.base.id, connector->base.name,
		encoder->base.base.id, encoder->base.name,
		intel_dp->max_sink_lane_count);

	intel_dp_set_default_max_sink_lane_count(intel_dp);
}

/* Get length of rates array potentially limited by max_rate. */
static int intel_dp_rate_limit_len(const int *rates, int len, int max_rate)
{
	int i;

	/* Limit results by potentially reduced max rate */
	for (i = 0; i < len; i++) {
		if (rates[len - i - 1] <= max_rate)
			return len - i;
	}

	return 0;
}

/* Get length of common rates array potentially limited by max_rate. */
static int intel_dp_common_len_rate_limit(const struct intel_dp *intel_dp,
					  int max_rate)
{
	return intel_dp_rate_limit_len(intel_dp->common_rates,
				       intel_dp->num_common_rates, max_rate);
}

static int intel_dp_common_rate(struct intel_dp *intel_dp, int index)
{
	if (drm_WARN_ON(&dp_to_i915(intel_dp)->drm,
			index < 0 || index >= intel_dp->num_common_rates))
		return 162000;

	return intel_dp->common_rates[index];
}

/* Theoretical max between source and sink */
static int intel_dp_max_common_rate(struct intel_dp *intel_dp)
{
	return intel_dp_common_rate(intel_dp, intel_dp->num_common_rates - 1);
}

static int intel_dp_max_source_lane_count(struct intel_digital_port *dig_port)
{
	int vbt_max_lanes = intel_bios_dp_max_lane_count(dig_port->base.devdata);
	int max_lanes = dig_port->max_lanes;

	if (vbt_max_lanes)
		max_lanes = min(max_lanes, vbt_max_lanes);

	return max_lanes;
}

/* Theoretical max between source and sink */
static int intel_dp_max_common_lane_count(struct intel_dp *intel_dp)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	int source_max = intel_dp_max_source_lane_count(dig_port);
	int sink_max = intel_dp->max_sink_lane_count;
	int lane_max = intel_tc_port_max_lane_count(dig_port);
	int lttpr_max = drm_dp_lttpr_max_lane_count(intel_dp->lttpr_common_caps);

	if (lttpr_max)
		sink_max = min(sink_max, lttpr_max);

	return min3(source_max, sink_max, lane_max);
}

int intel_dp_max_lane_count(struct intel_dp *intel_dp)
{
	switch (intel_dp->max_link_lane_count) {
	case 1:
	case 2:
	case 4:
		return intel_dp->max_link_lane_count;
	default:
		MISSING_CASE(intel_dp->max_link_lane_count);
		return 1;
	}
}

/*
 * The required data bandwidth for a mode with given pixel clock and bpp. This
 * is the required net bandwidth independent of the data bandwidth efficiency.
 *
 * TODO: check if callers of this function should use
 * intel_dp_effective_data_rate() instead.
 */
int
intel_dp_link_required(int pixel_clock, int bpp)
{
	/* pixel_clock is in kHz, divide bpp by 8 for bit to Byte conversion */
	return DIV_ROUND_UP(pixel_clock * bpp, 8);
}

/**
 * intel_dp_effective_data_rate - Return the pixel data rate accounting for BW allocation overhead
 * @pixel_clock: pixel clock in kHz
 * @bpp_x16: bits per pixel .4 fixed point format
 * @bw_overhead: BW allocation overhead in 1ppm units
 *
 * Return the effective pixel data rate in kB/sec units taking into account
 * the provided SSC, FEC, DSC BW allocation overhead.
 */
int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
				 int bw_overhead)
{
	return DIV_ROUND_UP_ULL(mul_u32_u32(pixel_clock * bpp_x16, bw_overhead),
				1000000 * 16 * 8);
}

/*
 * Given a link rate and lanes, get the data bandwidth.
 *
 * Data bandwidth is the actual payload rate, which depends on the data
 * bandwidth efficiency and the link rate.
 *
 * For 8b/10b channel encoding, SST and non-FEC, the data bandwidth efficiency
 * is 80%. For example, for a 1.62 Gbps link, 1.62*10^9 bps * 0.80 * (1/8) =
 * 162000 kBps. With 8-bit symbols, we have 162000 kHz symbol clock. Just by
 * coincidence, the port clock in kHz matches the data bandwidth in kBps, and
 * they equal the link bit rate in Gbps multiplied by 100000. (Note that this no
 * longer holds for data bandwidth as soon as FEC or MST is taken into account!)
 *
 * For 128b/132b channel encoding, the data bandwidth efficiency is 96.71%. For
 * example, for a 10 Gbps link, 10*10^9 bps * 0.9671 * (1/8) = 1208875
 * kBps. With 32-bit symbols, we have 312500 kHz symbol clock. The value 1000000
 * does not match the symbol clock, the port clock (not even if you think in
 * terms of a byte clock), nor the data bandwidth. It only matches the link bit
 * rate in units of 10000 bps.
 */
int
intel_dp_max_data_rate(int max_link_rate, int max_lanes)
{
	int ch_coding_efficiency =
		drm_dp_bw_channel_coding_efficiency(drm_dp_is_uhbr_rate(max_link_rate));
	int max_link_rate_kbps = max_link_rate * 10;

	/*
	 * UHBR rates always use 128b/132b channel encoding, and have
	 * 96.71% data bandwidth efficiency. Consider max_link_rate the
	 * link bit rate in units of 10000 bps.
	 */
	/*
	 * Lower than UHBR rates always use 8b/10b channel encoding, and have
	 * 80% data bandwidth efficiency for SST non-FEC. However, this turns
	 * out to be a nop by coincidence:
	 *
	 *	int max_link_rate_kbps = max_link_rate * 10;
	 *	max_link_rate_kbps = DIV_ROUND_DOWN_ULL(max_link_rate_kbps * 8, 10);
	 *	max_link_rate = max_link_rate_kbps / 8;
	 */
	return DIV_ROUND_DOWN_ULL(mul_u32_u32(max_link_rate_kbps * max_lanes,
					      ch_coding_efficiency),
				  1000000 * 8);
}
|
|
|
|
|
|
2020-11-17 19:47:05 +00:00
|
|
|
|
bool intel_dp_can_bigjoiner(struct intel_dp *intel_dp)
|
|
|
|
|
{
|
|
|
|
|
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
|
|
|
|
|
struct intel_encoder *encoder = &intel_dig_port->base;
|
|
|
|
|
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
|
|
|
|
|
|
2021-03-20 04:42:42 +00:00
|
|
|
|
return DISPLAY_VER(dev_priv) >= 12 ||
|
drm/i915/display: rename display version macros
While converting the rest of the driver to use GRAPHICS_VER() and
MEDIA_VER(), following what was done for display, some discussions went
back on what we did for display:
1) Why is the == comparison special that deserves a separate
macro instead of just getting the version and comparing directly
like is done for >, >=, <=?
2) IS_DISPLAY_RANGE() is weird in that it omits the "_VER" for
brevity. If we remove the current users of IS_DISPLAY_VER(), we
could actually repurpose it for a range check
With (1) there could be an advantage if we used gen_mask since multiple
conditionals be combined by the compiler in a single and instruction and
check the result. However a) INTEL_GEN() doesn't use the mask since it
would make the code bigger everywhere else and b) in the cases it made
sense, it also made sense to convert to the _RANGE() variant.
So here we repurpose IS_DISPLAY_VER() to work with a [ from, to ] range
like was the IS_DISPLAY_RANGE() and convert the current IS_DISPLAY_VER()
users to use == and != operators. Aside from the definition changes,
this was done by the following semantic patch:
@@ expression dev_priv, E1; @@
- !IS_DISPLAY_VER(dev_priv, E1)
+ DISPLAY_VER(dev_priv) != E1
@@ expression dev_priv, E1; @@
- IS_DISPLAY_VER(dev_priv, E1)
+ DISPLAY_VER(dev_priv) == E1
@@ expression dev_priv, from, until; @@
- IS_DISPLAY_RANGE(dev_priv, from, until)
+ IS_DISPLAY_VER(dev_priv, from, until)
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
[Jani: Minor conflict resolve while applying.]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20210413051002.92589-4-lucas.demarchi@intel.com
2021-04-13 05:09:53 +00:00
|
|
|
|
(DISPLAY_VER(dev_priv) == 11 &&
|
2020-11-17 19:47:05 +00:00
|
|
|
|
encoder->port != PORT_A);
|
|
|
|
|
}
|
|
|
|
|
|
static int dg2_max_source_rate(struct intel_dp *intel_dp)
{
	return intel_dp_is_edp(intel_dp) ? 810000 : 1350000;
}

static int icl_max_source_rate(struct intel_dp *intel_dp)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
	enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port);

	if (intel_phy_is_combo(dev_priv, phy) && !intel_dp_is_edp(intel_dp))
		return 540000;

	return 810000;
}

static int ehl_max_source_rate(struct intel_dp *intel_dp)
{
	if (intel_dp_is_edp(intel_dp))
		return 540000;

	return 810000;
}

static int mtl_max_source_rate(struct intel_dp *intel_dp)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
	enum phy phy = intel_port_to_phy(i915, dig_port->base.port);

	if (intel_is_c10phy(i915, phy))
		return 810000;

	return 2000000;
}

static int vbt_max_link_rate(struct intel_dp *intel_dp)
{
	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	int max_rate;

	max_rate = intel_bios_dp_max_link_rate(encoder->devdata);

	if (intel_dp_is_edp(intel_dp)) {
		struct intel_connector *connector = intel_dp->attached_connector;
		int edp_max_rate = connector->panel.vbt.edp.max_link_rate;

		if (max_rate && edp_max_rate)
			max_rate = min(max_rate, edp_max_rate);
		else if (edp_max_rate)
			max_rate = edp_max_rate;
	}

	return max_rate;
}

static void
intel_dp_set_source_rates(struct intel_dp *intel_dp)
{
	/* The values must be in increasing order */
	static const int mtl_rates[] = {
		162000, 216000, 243000, 270000, 324000, 432000, 540000, 675000,
		810000, 1000000, 1350000, 2000000,
	};
	static const int icl_rates[] = {
		162000, 216000, 270000, 324000, 432000, 540000, 648000, 810000,
		1000000, 1350000,
	};
	static const int bxt_rates[] = {
		162000, 216000, 243000, 270000, 324000, 432000, 540000
	};
	static const int skl_rates[] = {
		162000, 216000, 270000, 324000, 432000, 540000
	};
	static const int hsw_rates[] = {
		162000, 270000, 540000
	};
	static const int g4x_rates[] = {
		162000, 270000
	};
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
	const int *source_rates;
	int size, max_rate = 0, vbt_max_rate;

	/* This should only be done once */
drm/i915/display/dp: Make WARN* drm specific where drm_device ptr is available
drm specific WARN* calls include device information in the
backtrace, so we know what device the warnings originate from.
Convert all the calls of WARN* to device-specific drm_WARN*
variants in functions where drm_device or drm_i915_private struct
pointer is readily available.
The conversion was done automatically with below coccinelle semantic
patch. checkpatch errors/warnings are fixed manually.
@rule1@
identifier func, T;
@@
func(...) {
...
struct drm_device *T = ...;
<...
(
-WARN(
+drm_WARN(T,
...)
|
-WARN_ON(
+drm_WARN_ON(T,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(T,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(T,
...)
)
...>
}
@rule2@
identifier func, T;
@@
func(struct drm_device *T,...) {
<...
(
-WARN(
+drm_WARN(T,
...)
|
-WARN_ON(
+drm_WARN_ON(T,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(T,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(T,
...)
)
...>
}
@rule3@
identifier func, T;
@@
func(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule4@
identifier func, T;
@@
func(struct drm_i915_private *T,...) {
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200220165507.16823-6-pankaj.laxminarayan.bharadiya@intel.com
2020-02-20 16:55:04 +00:00
	drm_WARN_ON(&dev_priv->drm,
		    intel_dp->source_rates || intel_dp->num_source_rates);

	if (DISPLAY_VER(dev_priv) >= 14) {
		source_rates = mtl_rates;
		size = ARRAY_SIZE(mtl_rates);
		max_rate = mtl_max_source_rate(intel_dp);
	} else if (DISPLAY_VER(dev_priv) >= 11) {
		source_rates = icl_rates;
		size = ARRAY_SIZE(icl_rates);
		if (IS_DG2(dev_priv))
			max_rate = dg2_max_source_rate(intel_dp);
		else if (IS_ALDERLAKE_P(dev_priv) || IS_ALDERLAKE_S(dev_priv) ||
			 IS_DG1(dev_priv) || IS_ROCKETLAKE(dev_priv))
			max_rate = 810000;
		else if (IS_JASPERLAKE(dev_priv) || IS_ELKHARTLAKE(dev_priv))
			max_rate = ehl_max_source_rate(intel_dp);
		else
			max_rate = icl_max_source_rate(intel_dp);
	} else if (IS_GEMINILAKE(dev_priv) || IS_BROXTON(dev_priv)) {
		source_rates = bxt_rates;
		size = ARRAY_SIZE(bxt_rates);
	} else if (DISPLAY_VER(dev_priv) == 9) {
		source_rates = skl_rates;
		size = ARRAY_SIZE(skl_rates);
	} else if ((IS_HASWELL(dev_priv) && !IS_HASWELL_ULX(dev_priv)) ||
		   IS_BROADWELL(dev_priv)) {
		source_rates = hsw_rates;
		size = ARRAY_SIZE(hsw_rates);
	} else {
		source_rates = g4x_rates;
		size = ARRAY_SIZE(g4x_rates);
	}

	vbt_max_rate = vbt_max_link_rate(intel_dp);
	if (max_rate && vbt_max_rate)
		max_rate = min(max_rate, vbt_max_rate);
	else if (vbt_max_rate)
		max_rate = vbt_max_rate;

	if (max_rate)
		size = intel_dp_rate_limit_len(source_rates, size, max_rate);

	intel_dp->source_rates = source_rates;
	intel_dp->num_source_rates = size;
}

static int intersect_rates(const int *source_rates, int source_len,
			   const int *sink_rates, int sink_len,
			   int *common_rates)
{
	int i = 0, j = 0, k = 0;

	while (i < source_len && j < sink_len) {
		if (source_rates[i] == sink_rates[j]) {
			if (WARN_ON(k >= DP_MAX_SUPPORTED_RATES))
				return k;
			common_rates[k] = source_rates[i];
			++k;
			++i;
			++j;
		} else if (source_rates[i] < sink_rates[j]) {
			++i;
		} else {
			++j;
		}
	}
	return k;
}

/* return index of rate in rates array, or -1 if not found */
static int intel_dp_rate_index(const int *rates, int len, int rate)
{
	int i;

	for (i = 0; i < len; i++)
		if (rate == rates[i])
			return i;

	return -1;
}

static void intel_dp_set_common_rates(struct intel_dp *intel_dp)
{
drm/i915/display/dp: Prefer drm_WARN* over WARN*
struct drm_device specific drm_WARN* macros include device information
in the backtrace, so we know what device the warnings originate from.
Prefer drm_WARN* over WARN* at places where struct intel_dp or struct
drm_i915_private pointer is available.
Conversion is done with the below semantic patch:
@rule1@
identifier func, T;
@@
func(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule2@
identifier func, T;
@@
func(struct drm_i915_private *T,...) {
<+...
(
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule3@
identifier func, T;
@@
func(struct intel_dp *T,...) {
+ struct drm_i915_private *i915 = dp_to_i915(T);
<+...
(
-WARN_ON(
+drm_WARN_ON(&i915->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&i915->drm,
...)
)
...+>
}
@rule4@
identifier func, T;
@@
func(...) {
...
struct intel_dp *T = ...;
+ struct drm_i915_private *i915 = dp_to_i915(T);
<+...
(
-WARN_ON(
+drm_WARN_ON(&i915->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&i915->drm,
...)
)
...+>
}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200504181600.18503-3-pankaj.laxminarayan.bharadiya@intel.com
2020-05-04 18:15:53 +00:00
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	drm_WARN_ON(&i915->drm,
		    !intel_dp->num_source_rates || !intel_dp->num_sink_rates);

	intel_dp->num_common_rates = intersect_rates(intel_dp->source_rates,
						     intel_dp->num_source_rates,
						     intel_dp->sink_rates,
						     intel_dp->num_sink_rates,
						     intel_dp->common_rates);

	/* Paranoia, there should always be something in common. */
	if (drm_WARN_ON(&i915->drm, intel_dp->num_common_rates == 0)) {
		intel_dp->common_rates[0] = 162000;
		intel_dp->num_common_rates = 1;
	}
}

static bool intel_dp_link_params_valid(struct intel_dp *intel_dp, int link_rate,
				       u8 lane_count)
{
	/*
	 * FIXME: we need to synchronize the current link parameters with
	 * hardware readout. Currently fast link training doesn't work on
	 * boot-up.
	 */
	if (link_rate == 0 ||
	    link_rate > intel_dp->max_link_rate)
		return false;

	if (lane_count == 0 ||
	    lane_count > intel_dp_max_lane_count(intel_dp))
		return false;

	return true;
}

static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
						     int link_rate,
						     u8 lane_count)
{
	/* FIXME figure out what we actually want here */
	const struct drm_display_mode *fixed_mode =
		intel_panel_preferred_fixed_mode(intel_dp->attached_connector);
	int mode_rate, max_rate;

	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
	max_rate = intel_dp_max_data_rate(link_rate, lane_count);
	if (mode_rate > max_rate)
		return false;

	return true;
}

int intel_dp_get_link_train_fallback_values(struct intel_dp *intel_dp,
					    int link_rate, u8 lane_count)
{
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	int index;

	/*
	 * TODO: Enable fallback on MST links once MST link compute can handle
	 * the fallback params.
	 */
	if (intel_dp->is_mst) {
		drm_err(&i915->drm, "Link Training Unsuccessful\n");
		return -1;
	}

	if (intel_dp_is_edp(intel_dp) && !intel_dp->use_max_params) {
		drm_dbg_kms(&i915->drm,
			    "Retrying Link training for eDP with max parameters\n");
		intel_dp->use_max_params = true;
		return 0;
	}

	index = intel_dp_rate_index(intel_dp->common_rates,
				    intel_dp->num_common_rates,
				    link_rate);
	if (index > 0) {
		if (intel_dp_is_edp(intel_dp) &&
		    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
							      intel_dp_common_rate(intel_dp, index - 1),
							      lane_count)) {
			drm_dbg_kms(&i915->drm,
				    "Retrying Link training for eDP with same parameters\n");
			return 0;
		}
		intel_dp->max_link_rate = intel_dp_common_rate(intel_dp, index - 1);
		intel_dp->max_link_lane_count = lane_count;
	} else if (lane_count > 1) {
		if (intel_dp_is_edp(intel_dp) &&
		    !intel_dp_can_link_train_fallback_for_edp(intel_dp,
							      intel_dp_max_common_rate(intel_dp),
							      lane_count >> 1)) {
			drm_dbg_kms(&i915->drm,
				    "Retrying Link training for eDP with same parameters\n");
			return 0;
		}
		intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
		intel_dp->max_link_lane_count = lane_count >> 1;
	} else {
		drm_err(&i915->drm, "Link Training Unsuccessful\n");
		return -1;
	}

	return 0;
}

u32 intel_dp_mode_to_fec_clock(u32 mode_clock)
{
	return div_u64(mul_u32_u32(mode_clock, DP_DSC_FEC_OVERHEAD_FACTOR),
		       1000000U);
}

int intel_dp_bw_fec_overhead(bool fec_enabled)
{
	/*
	 * TODO: Calculate the actual overhead for a given mode.
	 * The hard-coded 1/0.972261=2.853% overhead factor
	 * corresponds (for instance) to the 8b/10b DP FEC 2.4% +
	 * 0.453% DSC overhead. This is enough for a 3840 width mode,
	 * which has a DSC overhead of up to ~0.2%, but may not be
	 * enough for a 1024 width mode where this is ~0.8% (on a 4
	 * lane DP link, with 2 DSC slices and 8 bpp color depth).
	 */
	return fec_enabled ? DP_DSC_FEC_OVERHEAD_FACTOR : 1000000;
}

static int
small_joiner_ram_size_bits(struct drm_i915_private *i915)
{
	if (DISPLAY_VER(i915) >= 13)
		return 17280 * 8;
	else if (DISPLAY_VER(i915) >= 11)
		return 7680 * 8;
	else
		return 6144 * 8;
}

u32 intel_dp_dsc_nearest_valid_bpp(struct drm_i915_private *i915, u32 bpp, u32 pipe_bpp)
{
	u32 bits_per_pixel = bpp;
	int i;

	/* Error out if the max bpp is less than smallest allowed valid bpp */
	if (bits_per_pixel < valid_dsc_bpp[0]) {
		drm_dbg_kms(&i915->drm, "Unsupported BPP %u, min %u\n",
			    bits_per_pixel, valid_dsc_bpp[0]);
		return 0;
	}

	/* From XE_LPD onwards we support from bpc up to uncompressed bpp-1 BPPs */
	if (DISPLAY_VER(i915) >= 13) {
		bits_per_pixel = min(bits_per_pixel, pipe_bpp - 1);
drm/i915: Ensure DSC has enough BW and stays within HW limits
We currently have an issue with some BPPs when using DSC.
According to the HW team, the reason is that a single VDSC engine
instance has some BW limitations that must be accounted for.
So, whenever we approach around 90% of the CDCLK, a second VDSC engine
has to be used.
This always means using two slices. However, in our current code,
the amount of slices is calculated independently of whether
we need to enable the second VDSC engine or not.
This leads to some logical issues when, according to the pixel clock needs,
we need to enable the second VDSC engine.
But as we calculated previously that we can only use a single slice,
we can't do that and fail.
So, we need to fix that so that the number of VDSC engines enabled
should depend on the number of slices, and the number of slices
should also depend on BW requirements.
Lastly, we didn't have BPP limitation for ADLP/MTL/DG2 implemented,
which says that DSC output BPPs can only be chosen within the range of 8 to 27
(BSpec 49259).
All of this applied together allows us to fix existing FIFO underruns,
which we have in many DSC tests.
v2: - Replace min with clamp_t(Jani Nikula)
- Fix commit message(Swati Sharma)
- Added "Closes"(Swati Sharma)
BSpec: 49259
HSDES: 18027167222
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/8231
Signed-off-by: Stanislav Lisovskiy <stanislav.lisovskiy@intel.com>
Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230306080401.22552-1-stanislav.lisovskiy@intel.com
2023-03-06 08:04:01 +00:00
		/*
		 * According to BSpec, 27 is the max DSC output bpp,
		 * 8 is the min DSC output bpp.
		 * While we can still clamp higher bpp values to 27, saving bandwidth,
		 * if it is required to compress to bpp < 8, it means we can't do
		 * that and probably means we can't fit the required mode, even with
		 * DSC enabled.
|
2023-06-29 12:25:34 +00:00
|
|
|
|
if (bits_per_pixel < 8) {
|
|
|
|
|
drm_dbg_kms(&i915->drm, "Unsupported BPP %u, min 8\n",
|
|
|
|
|
bits_per_pixel);
|
|
|
|
|
return 0;
|
|
|
|
|
}
|
|
|
|
|
bits_per_pixel = min_t(u32, bits_per_pixel, 27);
|
2022-11-23 10:05:51 +00:00
|
|
|
|
} else {
		/* Find the nearest match in the array of known BPPs from VESA */
		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
			if (bits_per_pixel < valid_dsc_bpp[i + 1])
				break;
		}
		drm_dbg_kms(&i915->drm, "Set dsc bpp from %d to VESA %d\n",
			    bits_per_pixel, valid_dsc_bpp[i]);

		bits_per_pixel = valid_dsc_bpp[i];
	}

	return bits_per_pixel;
}

static
u32 get_max_compressed_bpp_with_joiner(struct drm_i915_private *i915,
				       u32 mode_clock, u32 mode_hdisplay,
				       bool bigjoiner)
{
	u32 max_bpp_small_joiner_ram;

	/* Small Joiner Check: output bpp <= joiner RAM (bits) / Horiz. width */
	max_bpp_small_joiner_ram = small_joiner_ram_size_bits(i915) / mode_hdisplay;

	if (bigjoiner) {
		int bigjoiner_interface_bits = DISPLAY_VER(i915) >= 14 ? 36 : 24;
		/* With bigjoiner multiple dsc engines are used in parallel so PPC is 2 */
		int ppc = 2;
		u32 max_bpp_bigjoiner =
			i915->display.cdclk.max_cdclk_freq * ppc * bigjoiner_interface_bits /
			intel_dp_mode_to_fec_clock(mode_clock);

		max_bpp_small_joiner_ram *= 2;

		return min(max_bpp_small_joiner_ram, max_bpp_bigjoiner);
	}

	return max_bpp_small_joiner_ram;
}

u16 intel_dp_dsc_get_max_compressed_bpp(struct drm_i915_private *i915,
					u32 link_clock, u32 lane_count,
					u32 mode_clock, u32 mode_hdisplay,
					bool bigjoiner,
					enum intel_output_format output_format,
					u32 pipe_bpp,
					u32 timeslots)
{
	u32 bits_per_pixel, joiner_max_bpp;

	/*
	 * Available Link Bandwidth(Kbits/sec) = (NumberOfLanes)*
	 * (LinkSymbolClock)* 8 * (TimeSlots / 64)
	 * for SST -> TimeSlots is 64(i.e all TimeSlots that are available)
	 * for MST -> TimeSlots has to be calculated, based on mode requirements
	 *
	 * Due to FEC overhead, the available bw is reduced to 97.2261%.
	 * To support the given mode:
	 * Bandwidth required should be <= Available link Bandwidth * FEC Overhead
	 * =>ModeClock * bits_per_pixel <= Available Link Bandwidth * FEC Overhead
	 * =>bits_per_pixel <= Available link Bandwidth * FEC Overhead / ModeClock
	 * =>bits_per_pixel <= (NumberOfLanes * LinkSymbolClock) * 8 (TimeSlots / 64) /
	 *		       (ModeClock / FEC Overhead)
	 * =>bits_per_pixel <= (NumberOfLanes * LinkSymbolClock * TimeSlots) /
	 *		       (ModeClock / FEC Overhead * 8)
	 */
	bits_per_pixel = ((link_clock * lane_count) * timeslots) /
			 (intel_dp_mode_to_fec_clock(mode_clock) * 8);
2022-11-23 10:07:18 +00:00
|
|
|
|
|
2023-08-17 14:24:42 +00:00
|
|
|
|
/* Bandwidth required for 420 is half, that of 444 format */
|
|
|
|
|
if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
|
|
|
|
|
bits_per_pixel *= 2;
|
|
|
|
|
|
2023-08-17 14:24:43 +00:00
|
|
|
|
/*
|
|
|
|
|
* According to DSC 1.2a Section 4.1.1 Table 4.1 the maximum
|
|
|
|
|
* supported PPS value can be 63.9375 and with the further
|
|
|
|
|
* mention that for 420, 422 formats, bpp should be programmed double
|
|
|
|
|
* the target bpp restricting our target bpp to be 31.9375 at max.
|
|
|
|
|
*/
|
|
|
|
|
if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
|
|
|
|
|
bits_per_pixel = min_t(u32, bits_per_pixel, 31);
|
|
|
|
|
|
2022-11-23 10:07:18 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm, "Max link bpp is %u for %u timeslots "
|
|
|
|
|
"total bw %u pixel clock %u\n",
|
|
|
|
|
bits_per_pixel, timeslots,
|
|
|
|
|
(link_clock * lane_count * 8),
|
|
|
|
|
intel_dp_mode_to_fec_clock(mode_clock));
|
2019-09-25 08:21:09 +00:00
|
|
|
|
|
2023-08-17 14:24:56 +00:00
|
|
|
|
joiner_max_bpp = get_max_compressed_bpp_with_joiner(i915, mode_clock,
|
|
|
|
|
mode_hdisplay, bigjoiner);
|
|
|
|
|
bits_per_pixel = min(bits_per_pixel, joiner_max_bpp);
|
2020-11-17 19:47:05 +00:00
|
|
|
|
|
2022-11-23 10:05:51 +00:00
|
|
|
|
bits_per_pixel = intel_dp_dsc_nearest_valid_bpp(i915, bits_per_pixel, pipe_bpp);
|
2019-09-25 08:21:09 +00:00
|
|
|
|
|
2023-08-17 14:24:52 +00:00
|
|
|
|
return bits_per_pixel;
|
2019-09-25 08:21:09 +00:00
|
|
|
|
}
|
|
|
|
|
|
u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
				int mode_clock, int mode_hdisplay,
				bool bigjoiner)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);
	u8 min_slice_count, i;
	int max_slice_width;

	if (mode_clock <= DP_DSC_PEAK_PIXEL_RATE)
		min_slice_count = DIV_ROUND_UP(mode_clock,
					       DP_DSC_MAX_ENC_THROUGHPUT_0);
	else
		min_slice_count = DIV_ROUND_UP(mode_clock,
					       DP_DSC_MAX_ENC_THROUGHPUT_1);

	/*
	 * Due to some DSC engine BW limitations, we need to enable second
	 * slice and VDSC engine, whenever we approach close enough to max CDCLK
	 */
	if (mode_clock >= ((i915->display.cdclk.max_cdclk_freq * 85) / 100))
		min_slice_count = max_t(u8, min_slice_count, 2);

	max_slice_width = drm_dp_dsc_sink_max_slice_width(connector->dp.dsc_dpcd);
	if (max_slice_width < DP_DSC_MIN_SLICE_WIDTH_VALUE) {
		drm_dbg_kms(&i915->drm,
			    "Unsupported slice width %d by DP DSC Sink device\n",
			    max_slice_width);
		return 0;
	}
	/* Also take into account max slice width */
	min_slice_count = max_t(u8, min_slice_count,
				DIV_ROUND_UP(mode_hdisplay,
					     max_slice_width));

	/* Find the closest match to the valid slice count values */
	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
		u8 test_slice_count = valid_dsc_slicecount[i] << bigjoiner;

		if (test_slice_count >
		    drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, false))
			break;

		/* big joiner needs small joiner to be enabled */
		if (bigjoiner && test_slice_count < 4)
			continue;

		if (min_slice_count <= test_slice_count)
			return test_slice_count;
	}

	drm_dbg_kms(&i915->drm, "Unsupported Slice Count %d\n",
		    min_slice_count);
	return 0;
}

static bool source_can_output(struct intel_dp *intel_dp,
			      enum intel_output_format format)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	switch (format) {
	case INTEL_OUTPUT_FORMAT_RGB:
		return true;

	case INTEL_OUTPUT_FORMAT_YCBCR444:
		/*
		 * No YCbCr output support on gmch platforms.
		 * Also, ILK doesn't seem capable of DP YCbCr output.
		 * The displayed image is severely corrupted. SNB+ is fine.
		 */
		return !HAS_GMCH(i915) && !IS_IRONLAKE(i915);

	case INTEL_OUTPUT_FORMAT_YCBCR420:
		/* Platform < Gen 11 cannot output YCbCr420 format */
		return DISPLAY_VER(i915) >= 11;

	default:
		MISSING_CASE(format);
		return false;
	}
}

static bool
dfp_can_convert_from_rgb(struct intel_dp *intel_dp,
			 enum intel_output_format sink_format)
{
	if (!drm_dp_is_branch(intel_dp->dpcd))
		return false;

	if (sink_format == INTEL_OUTPUT_FORMAT_YCBCR444)
		return intel_dp->dfp.rgb_to_ycbcr;

	if (sink_format == INTEL_OUTPUT_FORMAT_YCBCR420)
		return intel_dp->dfp.rgb_to_ycbcr &&
			intel_dp->dfp.ycbcr_444_to_420;

	return false;
}

static bool
dfp_can_convert_from_ycbcr444(struct intel_dp *intel_dp,
			      enum intel_output_format sink_format)
{
	if (!drm_dp_is_branch(intel_dp->dpcd))
		return false;

	if (sink_format == INTEL_OUTPUT_FORMAT_YCBCR420)
		return intel_dp->dfp.ycbcr_444_to_420;

	return false;
}

static bool
dfp_can_convert(struct intel_dp *intel_dp,
		enum intel_output_format output_format,
		enum intel_output_format sink_format)
{
	switch (output_format) {
	case INTEL_OUTPUT_FORMAT_RGB:
		return dfp_can_convert_from_rgb(intel_dp, sink_format);
	case INTEL_OUTPUT_FORMAT_YCBCR444:
		return dfp_can_convert_from_ycbcr444(intel_dp, sink_format);
	default:
		MISSING_CASE(output_format);
		return false;
	}

	return false;
}

static enum intel_output_format
intel_dp_output_format(struct intel_connector *connector,
		       enum intel_output_format sink_format)
{
	struct intel_dp *intel_dp = intel_attached_dp(connector);
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	enum intel_output_format force_dsc_output_format =
		intel_dp->force_dsc_output_format;
	enum intel_output_format output_format;

	if (force_dsc_output_format) {
		if (source_can_output(intel_dp, force_dsc_output_format) &&
		    (!drm_dp_is_branch(intel_dp->dpcd) ||
		     sink_format != force_dsc_output_format ||
		     dfp_can_convert(intel_dp, force_dsc_output_format, sink_format)))
			return force_dsc_output_format;

		drm_dbg_kms(&i915->drm, "Cannot force DSC output format\n");
	}

	if (sink_format == INTEL_OUTPUT_FORMAT_RGB ||
	    dfp_can_convert_from_rgb(intel_dp, sink_format))
		output_format = INTEL_OUTPUT_FORMAT_RGB;

	else if (sink_format == INTEL_OUTPUT_FORMAT_YCBCR444 ||
		 dfp_can_convert_from_ycbcr444(intel_dp, sink_format))
		output_format = INTEL_OUTPUT_FORMAT_YCBCR444;

	else
		output_format = INTEL_OUTPUT_FORMAT_YCBCR420;

	drm_WARN_ON(&i915->drm, !source_can_output(intel_dp, output_format));

	return output_format;
}

int intel_dp_min_bpp(enum intel_output_format output_format)
{
	if (output_format == INTEL_OUTPUT_FORMAT_RGB)
		return 6 * 3;
	else
		return 8 * 3;
}

int intel_dp_output_bpp(enum intel_output_format output_format, int bpp)
{
	/*
	 * The bpp value was assumed to be for RGB format. For YCbCr 4:2:0
	 * output, the number of bits per pixel is half that of an RGB pixel.
	 */
	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
		bpp /= 2;

	return bpp;
}

static enum intel_output_format
intel_dp_sink_format(struct intel_connector *connector,
		     const struct drm_display_mode *mode)
{
	const struct drm_display_info *info = &connector->base.display_info;

	if (drm_mode_is_420_only(info, mode))
		return INTEL_OUTPUT_FORMAT_YCBCR420;

	return INTEL_OUTPUT_FORMAT_RGB;
}

static int
intel_dp_mode_min_output_bpp(struct intel_connector *connector,
			     const struct drm_display_mode *mode)
{
	enum intel_output_format output_format, sink_format;

	sink_format = intel_dp_sink_format(connector, mode);

	output_format = intel_dp_output_format(connector, sink_format);

	return intel_dp_output_bpp(output_format, intel_dp_min_bpp(output_format));
}

static bool intel_dp_hdisplay_bad(struct drm_i915_private *dev_priv,
				  int hdisplay)
{
	/*
	 * Older platforms don't like hdisplay==4096 with DP.
	 *
	 * On ILK/SNB/IVB the pipe seems to be somewhat running (scanline
	 * and frame counter increment), but we don't get vblank interrupts,
	 * and the pipe underruns immediately. The link also doesn't seem
	 * to get trained properly.
	 *
	 * On CHV the vblank interrupts don't seem to disappear but
	 * otherwise the symptoms are similar.
	 *
	 * TODO: confirm the behaviour on HSW+
	 */
	return hdisplay == 4096 && !HAS_DDI(dev_priv);
}

static int intel_dp_max_tmds_clock(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;
	const struct drm_display_info *info = &connector->base.display_info;
	int max_tmds_clock = intel_dp->dfp.max_tmds_clock;

	/* Only consider the sink's max TMDS clock if we know this is a HDMI DFP */
	if (max_tmds_clock && info->max_tmds_clock)
		max_tmds_clock = min(max_tmds_clock, info->max_tmds_clock);

	return max_tmds_clock;
}

static enum drm_mode_status
intel_dp_tmds_clock_valid(struct intel_dp *intel_dp,
			  int clock, int bpc,
			  enum intel_output_format sink_format,
			  bool respect_downstream_limits)
{
	int tmds_clock, min_tmds_clock, max_tmds_clock;

	if (!respect_downstream_limits)
		return MODE_OK;

	tmds_clock = intel_hdmi_tmds_clock(clock, bpc, sink_format);

	min_tmds_clock = intel_dp->dfp.min_tmds_clock;
	max_tmds_clock = intel_dp_max_tmds_clock(intel_dp);

	if (min_tmds_clock && tmds_clock < min_tmds_clock)
		return MODE_CLOCK_LOW;

	if (max_tmds_clock && tmds_clock > max_tmds_clock)
		return MODE_CLOCK_HIGH;

	return MODE_OK;
}

static enum drm_mode_status
intel_dp_mode_valid_downstream(struct intel_connector *connector,
			       const struct drm_display_mode *mode,
			       int target_clock)
{
	struct intel_dp *intel_dp = intel_attached_dp(connector);
	const struct drm_display_info *info = &connector->base.display_info;
	enum drm_mode_status status;
	enum intel_output_format sink_format;

	/* If PCON supports FRL MODE, check FRL bandwidth constraints */
	if (intel_dp->dfp.pcon_max_frl_bw) {
		int target_bw;
		int max_frl_bw;
		int bpp = intel_dp_mode_min_output_bpp(connector, mode);

		target_bw = bpp * target_clock;

		max_frl_bw = intel_dp->dfp.pcon_max_frl_bw;

		/* converting bw from Gbps to Kbps */
		max_frl_bw = max_frl_bw * 1000000;

		if (target_bw > max_frl_bw)
			return MODE_CLOCK_HIGH;

		return MODE_OK;
	}

	if (intel_dp->dfp.max_dotclock &&
	    target_clock > intel_dp->dfp.max_dotclock)
		return MODE_CLOCK_HIGH;

	sink_format = intel_dp_sink_format(connector, mode);

	/* Assume 8bpc for the DP++/HDMI/DVI TMDS clock check */
	status = intel_dp_tmds_clock_valid(intel_dp, target_clock,
					   8, sink_format, true);

	if (status != MODE_OK) {
		if (sink_format == INTEL_OUTPUT_FORMAT_YCBCR420 ||
		    !connector->base.ycbcr_420_allowed ||
		    !drm_mode_is_420_also(info, mode))
			return status;
		sink_format = INTEL_OUTPUT_FORMAT_YCBCR420;
		status = intel_dp_tmds_clock_valid(intel_dp, target_clock,
						   8, sink_format, true);
		if (status != MODE_OK)
			return status;
	}

	return MODE_OK;
}

bool intel_dp_need_bigjoiner(struct intel_dp *intel_dp,
			     int hdisplay, int clock)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	if (!intel_dp_can_bigjoiner(intel_dp))
		return false;

	return clock > i915->max_dotclk_freq || hdisplay > 5120;
}

static enum drm_mode_status
intel_dp_mode_valid(struct drm_connector *_connector,
		    struct drm_display_mode *mode)
{
	struct intel_connector *connector = to_intel_connector(_connector);
	struct intel_dp *intel_dp = intel_attached_dp(connector);
	struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
	const struct drm_display_mode *fixed_mode;
	int target_clock = mode->clock;
	int max_rate, mode_rate, max_lanes, max_link_clock;
	int max_dotclk = dev_priv->max_dotclk_freq;
	u16 dsc_max_compressed_bpp = 0;
	u8 dsc_slice_count = 0;
	enum drm_mode_status status;
	bool dsc = false, bigjoiner = false;

	status = intel_cpu_transcoder_mode_valid(dev_priv, mode);
	if (status != MODE_OK)
		return status;

	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
		return MODE_H_ILLEGAL;

	fixed_mode = intel_panel_fixed_mode(connector, mode);
	if (intel_dp_is_edp(intel_dp) && fixed_mode) {
		status = intel_panel_mode_valid(connector, mode);
		if (status != MODE_OK)
			return status;

		target_clock = fixed_mode->clock;
	}

	if (mode->clock < 10000)
		return MODE_CLOCK_LOW;

	if (intel_dp_need_bigjoiner(intel_dp, mode->hdisplay, target_clock)) {
		bigjoiner = true;
		max_dotclk *= 2;
	}
	if (target_clock > max_dotclk)
		return MODE_CLOCK_HIGH;

	if (intel_dp_hdisplay_bad(dev_priv, mode->hdisplay))
		return MODE_H_ILLEGAL;

	max_link_clock = intel_dp_max_link_rate(intel_dp);
	max_lanes = intel_dp_max_lane_count(intel_dp);

	max_rate = intel_dp_max_data_rate(max_link_clock, max_lanes);
	mode_rate = intel_dp_link_required(target_clock,
					   intel_dp_mode_min_output_bpp(connector, mode));

	if (HAS_DSC(dev_priv) &&
	    drm_dp_sink_supports_dsc(connector->dp.dsc_dpcd)) {
		enum intel_output_format sink_format, output_format;
		int pipe_bpp;

		sink_format = intel_dp_sink_format(connector, mode);
		output_format = intel_dp_output_format(connector, sink_format);
		/*
		 * TBD pass the connector BPC,
		 * for now U8_MAX so that max BPC on that platform would be picked
		 */
		pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);

		/*
		 * Output bpp is stored in 6.4 format so right shift by 4 to get the
		 * integer value since we support only integer values of bpp.
		 */
		if (intel_dp_is_edp(intel_dp)) {
			dsc_max_compressed_bpp =
				drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
			dsc_slice_count =
				drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
								true);
		} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
			dsc_max_compressed_bpp =
				intel_dp_dsc_get_max_compressed_bpp(dev_priv,
								    max_link_clock,
								    max_lanes,
								    target_clock,
								    mode->hdisplay,
								    bigjoiner,
								    output_format,
								    pipe_bpp, 64);
			dsc_slice_count =
				intel_dp_dsc_get_slice_count(connector,
							     target_clock,
							     mode->hdisplay,
							     bigjoiner);
		}

		dsc = dsc_max_compressed_bpp && dsc_slice_count;
	}

	/*
	 * Big joiner configuration needs DSC for TGL which is not true for
	 * XE_LPD where uncompressed joiner is supported.
	 */
	if (DISPLAY_VER(dev_priv) < 13 && bigjoiner && !dsc)
		return MODE_CLOCK_HIGH;

	if (mode_rate > max_rate && !dsc)
		return MODE_CLOCK_HIGH;

	status = intel_dp_mode_valid_downstream(connector, mode, target_clock);
	if (status != MODE_OK)
		return status;

	return intel_mode_valid_max_plane_size(dev_priv, mode, bigjoiner);
}

bool intel_dp_source_supports_tps3(struct drm_i915_private *i915)
{
	return DISPLAY_VER(i915) >= 9 || IS_BROADWELL(i915) || IS_HASWELL(i915);
}

bool intel_dp_source_supports_tps4(struct drm_i915_private *i915)
{
	return DISPLAY_VER(i915) >= 10;
}

static void snprintf_int_array(char *str, size_t len,
			       const int *array, int nelem)
{
	int i;

	str[0] = '\0';

	for (i = 0; i < nelem; i++) {
		int r = snprintf(str, len, "%s%d", i ? ", " : "", array[i]);

		if (r >= len)
			return;
		str += r;
		len -= r;
	}
}

static void intel_dp_print_rates(struct intel_dp *intel_dp)
|
|
|
|
|
{
|
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	char str[128]; /* FIXME: too big for stack? */

	if (!drm_debug_enabled(DRM_UT_KMS))
		return;

	snprintf_int_array(str, sizeof(str),
			   intel_dp->source_rates, intel_dp->num_source_rates);
	drm_dbg_kms(&i915->drm, "source rates: %s\n", str);

	snprintf_int_array(str, sizeof(str),
			   intel_dp->sink_rates, intel_dp->num_sink_rates);
	drm_dbg_kms(&i915->drm, "sink rates: %s\n", str);

	snprintf_int_array(str, sizeof(str),
			   intel_dp->common_rates, intel_dp->num_common_rates);
	drm_dbg_kms(&i915->drm, "common rates: %s\n", str);
}

int
intel_dp_max_link_rate(struct intel_dp *intel_dp)
{
	int len;

	len = intel_dp_common_len_rate_limit(intel_dp, intel_dp->max_link_rate);

	return intel_dp_common_rate(intel_dp, len - 1);
}

int intel_dp_rate_select(struct intel_dp *intel_dp, int rate)
{
drm/i915/display/dp: Prefer drm_WARN* over WARN*
struct drm_device specific drm_WARN* macros include device information
in the backtrace, so we know what device the warnings originate from.
Prefer drm_WARN* over WARN* at places where struct intel_dp or struct
drm_i915_private pointer is available.
Conversion is done with the below semantic patch:
@rule1@
identifier func, T;
@@
func(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule2@
identifier func, T;
@@
func(struct drm_i915_private *T,...) {
<+...
(
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule3@
identifier func, T;
@@
func(struct intel_dp *T,...) {
+ struct drm_i915_private *i915 = dp_to_i915(T);
<+...
(
-WARN_ON(
+drm_WARN_ON(&i915->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&i915->drm,
...)
)
...+>
}
@rule4@
identifier func, T;
@@
func(...) {
...
struct intel_dp *T = ...;
+ struct drm_i915_private *i915 = dp_to_i915(T);
<+...
(
-WARN_ON(
+drm_WARN_ON(&i915->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&i915->drm,
...)
)
...+>
}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200504181600.18503-3-pankaj.laxminarayan.bharadiya@intel.com
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	int i = intel_dp_rate_index(intel_dp->sink_rates,
				    intel_dp->num_sink_rates, rate);

	if (drm_WARN_ON(&i915->drm, i < 0))
		i = 0;

	return i;
}

void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
			   u8 *link_bw, u8 *rate_select)
{
	/* eDP 1.4 rate select method. */
	if (intel_dp->use_rate_select) {
		*link_bw = 0;
		*rate_select =
			intel_dp_rate_select(intel_dp, port_clock);
	} else {
		*link_bw = drm_dp_link_rate_to_bw_code(port_clock);
		*rate_select = 0;
	}
}

bool intel_dp_has_hdmi_sink(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;

	return connector->base.display_info.is_hdmi;
}

static bool intel_dp_source_supports_fec(struct intel_dp *intel_dp,
					 const struct intel_crtc_state *pipe_config)
{
	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);

	if (DISPLAY_VER(dev_priv) >= 12)
		return true;

	if (DISPLAY_VER(dev_priv) == 11 && encoder->port != PORT_A &&
	    !intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST))
		return true;

	return false;
}

bool intel_dp_supports_fec(struct intel_dp *intel_dp,
			   const struct intel_connector *connector,
			   const struct intel_crtc_state *pipe_config)
{
	return intel_dp_source_supports_fec(intel_dp, pipe_config) &&
		drm_dp_sink_supports_fec(connector->dp.fec_capability);
}

static bool intel_dp_supports_dsc(const struct intel_connector *connector,
				  const struct intel_crtc_state *crtc_state)
{
	if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP) && !crtc_state->fec_enable)
		return false;

	return intel_dsc_source_support(crtc_state) &&
		connector->dp.dsc_decompression_aux &&
		drm_dp_sink_supports_dsc(connector->dp.dsc_dpcd);
}

static int intel_dp_hdmi_compute_bpc(struct intel_dp *intel_dp,
				     const struct intel_crtc_state *crtc_state,
				     int bpc, bool respect_downstream_limits)
{
	int clock = crtc_state->hw.adjusted_mode.crtc_clock;

	/*
	 * Current bpc could already be below 8bpc due to
	 * FDI bandwidth constraints or other limits.
	 * HDMI minimum is 8bpc however.
	 */
	bpc = max(bpc, 8);

	/*
	 * We will never exceed downstream TMDS clock limits while
	 * attempting deep color. If the user insists on forcing an
	 * out of spec mode they will have to be satisfied with 8bpc.
	 */
	if (!respect_downstream_limits)
		bpc = 8;

	for (; bpc >= 8; bpc -= 2) {
		if (intel_hdmi_bpc_possible(crtc_state, bpc,
					    intel_dp_has_hdmi_sink(intel_dp)) &&
		    intel_dp_tmds_clock_valid(intel_dp, clock, bpc, crtc_state->sink_format,
					      respect_downstream_limits) == MODE_OK)
			return bpc;
	}

	return -EINVAL;
}

static int intel_dp_max_bpp(struct intel_dp *intel_dp,
			    const struct intel_crtc_state *crtc_state,
			    bool respect_downstream_limits)
{
	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
	struct intel_connector *intel_connector = intel_dp->attached_connector;
	int bpp, bpc;

	bpc = crtc_state->pipe_bpp / 3;

	if (intel_dp->dfp.max_bpc)
		bpc = min_t(int, bpc, intel_dp->dfp.max_bpc);

	if (intel_dp->dfp.min_tmds_clock) {
		int max_hdmi_bpc;

		max_hdmi_bpc = intel_dp_hdmi_compute_bpc(intel_dp, crtc_state, bpc,
							 respect_downstream_limits);
		if (max_hdmi_bpc < 0)
			return 0;

		bpc = min(bpc, max_hdmi_bpc);
	}

	bpp = bpc * 3;
	if (intel_dp_is_edp(intel_dp)) {
		/* Get bpp from vbt only for panels that don't have bpp in edid */
		if (intel_connector->base.display_info.bpc == 0 &&
		    intel_connector->panel.vbt.edp.bpp &&
		    intel_connector->panel.vbt.edp.bpp < bpp) {
drm/i915/dp: conversion to struct drm_device logging macros.
This converts various instances of printk based logging macros in
i915/display/intel_dp.c with the new struct drm_device based logging
macros using the following coccinelle script:
@rule1@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@rule2@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
New checkpatch warnings were fixed manually.
v2: fix merge conflict with new changes in file.
Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200122110844.2022-5-wambui.karugax@gmail.com
2020-01-22 11:08:42 +00:00
			drm_dbg_kms(&dev_priv->drm,
				    "clamping bpp for eDP panel to BIOS-provided %i\n",
				    intel_connector->panel.vbt.edp.bpp);
			bpp = intel_connector->panel.vbt.edp.bpp;
		}
	}

	return bpp;
}

/* Adjust link config limits based on compliance test requests. */
void
intel_dp_adjust_compliance_config(struct intel_dp *intel_dp,
				  struct intel_crtc_state *pipe_config,
				  struct link_config_limits *limits)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	/* For DP Compliance we override the computed bpp for the pipe */
	if (intel_dp->compliance.test_data.bpc != 0) {
		int bpp = 3 * intel_dp->compliance.test_data.bpc;

		limits->pipe.min_bpp = limits->pipe.max_bpp = bpp;
		pipe_config->dither_force_disable = bpp == 6 * 3;

		drm_dbg_kms(&i915->drm, "Setting pipe_bpp to %d\n", bpp);
	}

	/* Use values requested by Compliance Test Request */
	if (intel_dp->compliance.test_type == DP_TEST_LINK_TRAINING) {
		int index;

		/* Validate the compliance test data since max values
		 * might have changed due to link train fallback.
		 */
		if (intel_dp_link_params_valid(intel_dp, intel_dp->compliance.test_link_rate,
					       intel_dp->compliance.test_lane_count)) {
			index = intel_dp_rate_index(intel_dp->common_rates,
						    intel_dp->num_common_rates,
						    intel_dp->compliance.test_link_rate);
			if (index >= 0)
				limits->min_rate = limits->max_rate =
					intel_dp->compliance.test_link_rate;
			limits->min_lane_count = limits->max_lane_count =
				intel_dp->compliance.test_lane_count;
		}
	}
}

static bool has_seamless_m_n(struct intel_connector *connector)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);

	/*
	 * Seamless M/N reprogramming only implemented
	 * for BDW+ double buffered M/N registers so far.
	 */
	return HAS_DOUBLE_BUFFERED_M_N(i915) &&
		intel_panel_drrs_type(connector) == DRRS_TYPE_SEAMLESS;
}

static int intel_dp_mode_clock(const struct intel_crtc_state *crtc_state,
			       const struct drm_connector_state *conn_state)
{
	struct intel_connector *connector = to_intel_connector(conn_state->connector);
	const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode;

	/* FIXME a bit of a mess wrt clock vs. crtc_clock */
	if (has_seamless_m_n(connector))
		return intel_panel_highest_mode(connector, adjusted_mode)->clock;
	else
		return adjusted_mode->crtc_clock;
}

/* Optimize link config in order: max bpp, min clock, min lanes */
static int
intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
				  struct intel_crtc_state *pipe_config,
				  const struct drm_connector_state *conn_state,
				  const struct link_config_limits *limits)
{
	int bpp, i, lane_count, clock = intel_dp_mode_clock(pipe_config, conn_state);
	int mode_rate, link_rate, link_avail;

	for (bpp = to_bpp_int(limits->link.max_bpp_x16);
	     bpp >= to_bpp_int(limits->link.min_bpp_x16);
	     bpp -= 2 * 3) {
		int link_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);

		mode_rate = intel_dp_link_required(clock, link_bpp);

		for (i = 0; i < intel_dp->num_common_rates; i++) {
			link_rate = intel_dp_common_rate(intel_dp, i);
			if (link_rate < limits->min_rate ||
			    link_rate > limits->max_rate)
				continue;

			for (lane_count = limits->min_lane_count;
			     lane_count <= limits->max_lane_count;
			     lane_count <<= 1) {
				link_avail = intel_dp_max_data_rate(link_rate,
								    lane_count);

				if (mode_rate <= link_avail) {
					pipe_config->lane_count = lane_count;
					pipe_config->pipe_bpp = bpp;
					pipe_config->port_clock = link_rate;

					return 0;
				}
			}
		}
	}

	return -EINVAL;
}

static
u8 intel_dp_dsc_max_src_input_bpc(struct drm_i915_private *i915)
{
	/* Max DSC Input BPC for ICL is 10 and for TGL+ is 12 */
	if (DISPLAY_VER(i915) >= 12)
		return 12;
	if (DISPLAY_VER(i915) == 11)
		return 10;

	return 0;
}

int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
				 u8 max_req_bpc)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);
	int i, num_bpc;
	u8 dsc_bpc[3] = {};
	u8 dsc_max_bpc;

	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(i915);

	if (!dsc_max_bpc)
		return dsc_max_bpc;

	dsc_max_bpc = min_t(u8, dsc_max_bpc, max_req_bpc);

	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
						       dsc_bpc);
	for (i = 0; i < num_bpc; i++) {
		if (dsc_max_bpc >= dsc_bpc[i])
			return dsc_bpc[i] * 3;
	}

	return 0;
}

static int intel_dp_source_dsc_version_minor(struct drm_i915_private *i915)
{
	return DISPLAY_VER(i915) >= 14 ? 2 : 1;
}

static int intel_dp_sink_dsc_version_minor(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
{
	return (dsc_dpcd[DP_DSC_REV - DP_DSC_SUPPORT] & DP_DSC_MINOR_MASK) >>
		DP_DSC_MINOR_SHIFT;
}

static int intel_dp_get_slice_height(int vactive)
{
	int slice_height;

	/*
	 * VDSC 1.2a spec in Section 3.8 Options for Slices implies that 108
	 * lines is an optimal slice height, but any size can be used as long as
	 * vertical active integer multiple and maximum vertical slice count
	 * requirements are met.
	 */
	for (slice_height = 108; slice_height <= vactive; slice_height += 2)
		if (vactive % slice_height == 0)
			return slice_height;

	/*
	 * Highly unlikely we reach here as most of the resolutions will end up
	 * finding appropriate slice_height in above loop but returning
	 * slice_height as 2 here as it should work with all resolutions.
	 */
	return 2;
}

static int intel_dp_dsc_compute_params(const struct intel_connector *connector,
				       struct intel_crtc_state *crtc_state)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);
	struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
	u8 line_buf_depth;
	int ret;

	/*
	 * RC_MODEL_SIZE is currently a constant across all configurations.
	 *
	 * FIXME: Look into using sink defined DPCD DP_DSC_RC_BUF_BLK_SIZE and
	 * DP_DSC_RC_BUF_SIZE for this.
	 */
	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
	vdsc_cfg->pic_height = crtc_state->hw.adjusted_mode.crtc_vdisplay;

	vdsc_cfg->slice_height = intel_dp_get_slice_height(vdsc_cfg->pic_height);

	ret = intel_dsc_compute_params(crtc_state);
	if (ret)
		return ret;

	vdsc_cfg->dsc_version_major =
		(connector->dp.dsc_dpcd[DP_DSC_REV - DP_DSC_SUPPORT] &
		 DP_DSC_MAJOR_MASK) >> DP_DSC_MAJOR_SHIFT;
	vdsc_cfg->dsc_version_minor =
		min(intel_dp_source_dsc_version_minor(i915),
		    intel_dp_sink_dsc_version_minor(connector->dp.dsc_dpcd));
	if (vdsc_cfg->convert_rgb)
		vdsc_cfg->convert_rgb =
			connector->dp.dsc_dpcd[DP_DSC_DEC_COLOR_FORMAT_CAP - DP_DSC_SUPPORT] &
			DP_DSC_RGB;

	line_buf_depth = drm_dp_dsc_sink_line_buf_depth(connector->dp.dsc_dpcd);
	if (!line_buf_depth) {
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm,
|
|
|
|
|
"DSC Sink Line Buffer Depth invalid\n");
|
2019-12-10 10:50:48 +00:00
|
|
|
|
return -EINVAL;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (vdsc_cfg->dsc_version_minor == 2)
|
|
|
|
|
vdsc_cfg->line_buf_depth = (line_buf_depth == DSC_1_2_MAX_LINEBUF_DEPTH_BITS) ?
|
|
|
|
|
DSC_1_2_MAX_LINEBUF_DEPTH_VAL : line_buf_depth;
|
|
|
|
|
else
|
|
|
|
|
vdsc_cfg->line_buf_depth = (line_buf_depth > DSC_1_1_MAX_LINEBUF_DEPTH_BITS) ?
|
|
|
|
|
DSC_1_1_MAX_LINEBUF_DEPTH_BITS : line_buf_depth;
|
|
|
|
|
|
|
|
|
|
vdsc_cfg->block_pred_enable =
|
2023-10-06 13:37:21 +00:00
|
|
|
|
connector->dp.dsc_dpcd[DP_DSC_BLK_PREDICTION_SUPPORT - DP_DSC_SUPPORT] &
|
2019-12-10 10:50:48 +00:00
|
|
|
|
DP_DSC_BLK_PREDICTION_IS_SUPPORTED;
|
|
|
|
|
|
|
|
|
|
return drm_dsc_compute_rc_parameters(vdsc_cfg);
|
|
|
|
|
}
|
|
|
|
|
|
2023-10-06 13:37:22 +00:00
|
|
|
|
static bool intel_dp_dsc_supports_format(const struct intel_connector *connector,
|
2023-03-09 06:28:50 +00:00
|
|
|
|
enum intel_output_format output_format)
|
|
|
|
|
{
|
2023-10-06 13:37:22 +00:00
|
|
|
|
struct drm_i915_private *i915 = to_i915(connector->base.dev);
|
2023-03-09 06:28:50 +00:00
|
|
|
|
u8 sink_dsc_format;
|
|
|
|
|
|
|
|
|
|
switch (output_format) {
|
|
|
|
|
case INTEL_OUTPUT_FORMAT_RGB:
|
|
|
|
|
sink_dsc_format = DP_DSC_RGB;
|
|
|
|
|
break;
|
|
|
|
|
case INTEL_OUTPUT_FORMAT_YCBCR444:
|
|
|
|
|
sink_dsc_format = DP_DSC_YCbCr444;
|
|
|
|
|
break;
|
|
|
|
|
case INTEL_OUTPUT_FORMAT_YCBCR420:
|
2023-10-06 13:37:19 +00:00
|
|
|
|
if (min(intel_dp_source_dsc_version_minor(i915),
|
2023-10-06 13:37:22 +00:00
|
|
|
|
intel_dp_sink_dsc_version_minor(connector->dp.dsc_dpcd)) < 2)
|
2023-03-09 06:28:50 +00:00
|
|
|
|
return false;
|
|
|
|
|
sink_dsc_format = DP_DSC_YCbCr420_Native;
|
|
|
|
|
break;
|
|
|
|
|
default:
|
|
|
|
|
return false;
|
|
|
|
|
}
|
|
|
|
|
|
2023-10-06 13:37:22 +00:00
|
|
|
|
return drm_dp_dsc_sink_supports_format(connector->dp.dsc_dpcd, sink_dsc_format);
|
2023-03-09 06:28:50 +00:00
|
|
|
|
}

static bool is_bw_sufficient_for_dsc_config(u16 compressed_bppx16, u32 link_clock,
					    u32 lane_count, u32 mode_clock,
					    enum intel_output_format output_format,
					    int timeslots)
{
	u32 available_bw, required_bw;

	available_bw = (link_clock * lane_count * timeslots * 16) / 8;
	required_bw = compressed_bppx16 * (intel_dp_mode_to_fec_clock(mode_clock));

	return available_bw > required_bw;
}

static int dsc_compute_link_config(struct intel_dp *intel_dp,
				   struct intel_crtc_state *pipe_config,
				   struct link_config_limits *limits,
				   u16 compressed_bppx16,
				   int timeslots)
{
	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
	int link_rate, lane_count;
	int i;

	for (i = 0; i < intel_dp->num_common_rates; i++) {
		link_rate = intel_dp_common_rate(intel_dp, i);
		if (link_rate < limits->min_rate || link_rate > limits->max_rate)
			continue;

		for (lane_count = limits->min_lane_count;
		     lane_count <= limits->max_lane_count;
		     lane_count <<= 1) {
			if (!is_bw_sufficient_for_dsc_config(compressed_bppx16, link_rate,
							     lane_count, adjusted_mode->clock,
							     pipe_config->output_format,
							     timeslots))
				continue;

			pipe_config->lane_count = lane_count;
			pipe_config->port_clock = link_rate;

			return 0;
		}
	}

	return -EINVAL;
}

static
u16 intel_dp_dsc_max_sink_compressed_bppx16(const struct intel_connector *connector,
					    struct intel_crtc_state *pipe_config,
					    int bpc)
{
	u16 max_bppx16 = drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd);

	if (max_bppx16)
		return max_bppx16;
	/*
	 * If support not given in DPCD 67h, 68h use the Maximum Allowed bit rate
	 * values as given in spec Table 2-157 DP v2.0
	 */
	switch (pipe_config->output_format) {
	case INTEL_OUTPUT_FORMAT_RGB:
	case INTEL_OUTPUT_FORMAT_YCBCR444:
		return (3 * bpc) << 4;
	case INTEL_OUTPUT_FORMAT_YCBCR420:
		return (3 * (bpc / 2)) << 4;
	default:
		MISSING_CASE(pipe_config->output_format);
		break;
	}

	return 0;
}

int intel_dp_dsc_sink_min_compressed_bpp(struct intel_crtc_state *pipe_config)
{
	/* From Mandatory bit rate range Support Table 2-157 (DP v2.0) */
	switch (pipe_config->output_format) {
	case INTEL_OUTPUT_FORMAT_RGB:
	case INTEL_OUTPUT_FORMAT_YCBCR444:
		return 8;
	case INTEL_OUTPUT_FORMAT_YCBCR420:
		return 6;
	default:
		MISSING_CASE(pipe_config->output_format);
		break;
	}

	return 0;
}

int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
					 struct intel_crtc_state *pipe_config,
					 int bpc)
{
	return intel_dp_dsc_max_sink_compressed_bppx16(connector,
						       pipe_config, bpc) >> 4;
}

static int dsc_src_min_compressed_bpp(void)
{
	/* Min Compressed bpp supported by source is 8 */
	return 8;
}

static int dsc_src_max_compressed_bpp(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	/*
	 * Max Compressed bpp for Gen 13+ is 27bpp.
	 * For earlier platforms it is 23bpp. (Bspec:49259).
	 */
	if (DISPLAY_VER(i915) < 13)
		return 23;
	else
		return 27;
}

/*
 * From a list of valid compressed bpps try different compressed bpp and find a
 * suitable link configuration that can support it.
 */
static int
icl_dsc_compute_link_config(struct intel_dp *intel_dp,
			    struct intel_crtc_state *pipe_config,
			    struct link_config_limits *limits,
			    int dsc_max_bpp,
			    int dsc_min_bpp,
			    int pipe_bpp,
			    int timeslots)
{
	int i, ret;

	/* Compressed BPP should be less than the Input DSC bpp */
	dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);

	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
		if (valid_dsc_bpp[i] < dsc_min_bpp)
			continue;
		if (valid_dsc_bpp[i] > dsc_max_bpp)
			break;

		ret = dsc_compute_link_config(intel_dp,
					      pipe_config,
					      limits,
					      valid_dsc_bpp[i] << 4,
					      timeslots);
		if (ret == 0) {
			pipe_config->dsc.compressed_bpp_x16 =
				to_bpp_x16(valid_dsc_bpp[i]);
			return 0;
		}
	}

	return -EINVAL;
}

/*
 * From XE_LPD onwards we support compressed bpps in steps of 1 up to
 * uncompressed bpp-1. So we start from the max compressed bpp and see if any
 * link configuration is able to support that compressed bpp; if not, we
 * step down and check for a lower compressed bpp.
 */
static int
xelpd_dsc_compute_link_config(struct intel_dp *intel_dp,
			      const struct intel_connector *connector,
			      struct intel_crtc_state *pipe_config,
			      struct link_config_limits *limits,
			      int dsc_max_bpp,
			      int dsc_min_bpp,
			      int pipe_bpp,
			      int timeslots)
{
	u8 bppx16_incr = drm_dp_dsc_sink_bpp_incr(connector->dp.dsc_dpcd);
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	u16 compressed_bppx16;
	u8 bppx16_step;
	int ret;

	if (DISPLAY_VER(i915) < 14 || bppx16_incr <= 1)
		bppx16_step = 16;
	else
		bppx16_step = 16 / bppx16_incr;

	/* Compressed BPP should be less than the Input DSC bpp */
	dsc_max_bpp = min(dsc_max_bpp << 4, (pipe_bpp << 4) - bppx16_step);
	dsc_min_bpp = dsc_min_bpp << 4;

	for (compressed_bppx16 = dsc_max_bpp;
	     compressed_bppx16 >= dsc_min_bpp;
	     compressed_bppx16 -= bppx16_step) {
		if (intel_dp->force_dsc_fractional_bpp_en &&
		    !to_bpp_frac(compressed_bppx16))
			continue;
		ret = dsc_compute_link_config(intel_dp,
					      pipe_config,
					      limits,
					      compressed_bppx16,
					      timeslots);
		if (ret == 0) {
			pipe_config->dsc.compressed_bpp_x16 = compressed_bppx16;
			if (intel_dp->force_dsc_fractional_bpp_en &&
			    to_bpp_frac(compressed_bppx16))
				drm_dbg_kms(&i915->drm, "Forcing DSC fractional bpp\n");

			return 0;
		}
	}
	return -EINVAL;
}

static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
				      const struct intel_connector *connector,
				      struct intel_crtc_state *pipe_config,
				      struct link_config_limits *limits,
				      int pipe_bpp,
				      int timeslots)
{
	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
	int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
	int dsc_joiner_max_bpp;

	dsc_src_min_bpp = dsc_src_min_compressed_bpp();
	dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(pipe_config);
	dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
	dsc_min_bpp = max(dsc_min_bpp, to_bpp_int_roundup(limits->link.min_bpp_x16));

	dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
	dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
								pipe_config,
								pipe_bpp / 3);
	dsc_max_bpp = dsc_sink_max_bpp ? min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;

	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(i915, adjusted_mode->clock,
								adjusted_mode->hdisplay,
								pipe_config->bigjoiner_pipes);
	dsc_max_bpp = min(dsc_max_bpp, dsc_joiner_max_bpp);
	dsc_max_bpp = min(dsc_max_bpp, to_bpp_int(limits->link.max_bpp_x16));

	if (DISPLAY_VER(i915) >= 13)
		return xelpd_dsc_compute_link_config(intel_dp, connector, pipe_config, limits,
						     dsc_max_bpp, dsc_min_bpp, pipe_bpp, timeslots);
	return icl_dsc_compute_link_config(intel_dp, pipe_config, limits,
					   dsc_max_bpp, dsc_min_bpp, pipe_bpp, timeslots);
}

static
u8 intel_dp_dsc_min_src_input_bpc(struct drm_i915_private *i915)
{
	/* Min DSC Input BPC for ICL+ is 8 */
	return HAS_DSC(i915) ? 8 : 0;
}

static
bool is_dsc_pipe_bpp_sufficient(struct drm_i915_private *i915,
				struct drm_connector_state *conn_state,
				struct link_config_limits *limits,
				int pipe_bpp)
{
	u8 dsc_max_bpc, dsc_min_bpc, dsc_max_pipe_bpp, dsc_min_pipe_bpp;

	dsc_max_bpc = min(intel_dp_dsc_max_src_input_bpc(i915), conn_state->max_requested_bpc);
	dsc_min_bpc = intel_dp_dsc_min_src_input_bpc(i915);

	dsc_max_pipe_bpp = min(dsc_max_bpc * 3, limits->pipe.max_bpp);
	dsc_min_pipe_bpp = max(dsc_min_bpc * 3, limits->pipe.min_bpp);

	return pipe_bpp >= dsc_min_pipe_bpp &&
	       pipe_bpp <= dsc_max_pipe_bpp;
}

static
int intel_dp_force_dsc_pipe_bpp(struct intel_dp *intel_dp,
				struct drm_connector_state *conn_state,
				struct link_config_limits *limits)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	int forced_bpp;

	if (!intel_dp->force_dsc_bpc)
		return 0;

	forced_bpp = intel_dp->force_dsc_bpc * 3;

	if (is_dsc_pipe_bpp_sufficient(i915, conn_state, limits, forced_bpp)) {
		drm_dbg_kms(&i915->drm, "Input DSC BPC forced to %d\n", intel_dp->force_dsc_bpc);
		return forced_bpp;
	}

	drm_dbg_kms(&i915->drm, "Cannot force DSC BPC:%d, due to DSC BPC limits\n",
		    intel_dp->force_dsc_bpc);

	return 0;
}

static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
					 struct intel_crtc_state *pipe_config,
					 struct drm_connector_state *conn_state,
					 struct link_config_limits *limits,
					 int timeslots)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	const struct intel_connector *connector =
		to_intel_connector(conn_state->connector);
	u8 max_req_bpc = conn_state->max_requested_bpc;
	u8 dsc_max_bpc, dsc_max_bpp;
	u8 dsc_min_bpc, dsc_min_bpp;
	u8 dsc_bpc[3] = {};
	int forced_bpp, pipe_bpp;
	int num_bpc, i, ret;

	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, conn_state, limits);

	if (forced_bpp) {
		ret = dsc_compute_compressed_bpp(intel_dp, connector, pipe_config,
						 limits, forced_bpp, timeslots);
		if (ret == 0) {
			pipe_config->pipe_bpp = forced_bpp;
			return 0;
		}
	}

	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(i915);
	if (!dsc_max_bpc)
		return -EINVAL;

	dsc_max_bpc = min_t(u8, dsc_max_bpc, max_req_bpc);
	dsc_max_bpp = min(dsc_max_bpc * 3, limits->pipe.max_bpp);

	dsc_min_bpc = intel_dp_dsc_min_src_input_bpc(i915);
	dsc_min_bpp = max(dsc_min_bpc * 3, limits->pipe.min_bpp);

	/*
	 * Get the maximum DSC bpc that will be supported by any valid
	 * link configuration and compressed bpp.
	 */
	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd, dsc_bpc);
	for (i = 0; i < num_bpc; i++) {
		pipe_bpp = dsc_bpc[i] * 3;
		if (pipe_bpp < dsc_min_bpp)
			break;
		if (pipe_bpp > dsc_max_bpp)
			continue;
		ret = dsc_compute_compressed_bpp(intel_dp, connector, pipe_config,
						 limits, pipe_bpp, timeslots);
		if (ret == 0) {
			pipe_config->pipe_bpp = pipe_bpp;
			return 0;
		}
	}

	return -EINVAL;
}

static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
					  struct intel_crtc_state *pipe_config,
					  struct drm_connector_state *conn_state,
					  struct link_config_limits *limits)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_connector *connector =
		to_intel_connector(conn_state->connector);
	int pipe_bpp, forced_bpp;
	int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
	int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;

	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, conn_state, limits);

	if (forced_bpp) {
		pipe_bpp = forced_bpp;
	} else {
		int max_bpc = min(limits->pipe.max_bpp / 3, (int)conn_state->max_requested_bpc);

		/* For eDP use max bpp that can be supported with DSC. */
		pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, max_bpc);
		if (!is_dsc_pipe_bpp_sufficient(i915, conn_state, limits, pipe_bpp)) {
			drm_dbg_kms(&i915->drm,
				    "Computed BPC is not in DSC BPC limits\n");
			return -EINVAL;
		}
	}
	pipe_config->port_clock = limits->max_rate;
	pipe_config->lane_count = limits->max_lane_count;

	dsc_src_min_bpp = dsc_src_min_compressed_bpp();
	dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(pipe_config);
	dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
	dsc_min_bpp = max(dsc_min_bpp, to_bpp_int_roundup(limits->link.min_bpp_x16));

	dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
	dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
								pipe_config,
								pipe_bpp / 3);
	dsc_max_bpp = dsc_sink_max_bpp ? min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;
	dsc_max_bpp = min(dsc_max_bpp, to_bpp_int(limits->link.max_bpp_x16));

	/* Compressed BPP should be less than the Input DSC bpp */
	dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);

	pipe_config->dsc.compressed_bpp_x16 =
		to_bpp_x16(max(dsc_min_bpp, dsc_max_bpp));

	pipe_config->pipe_bpp = pipe_bpp;

	return 0;
}

int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
				struct intel_crtc_state *pipe_config,
				struct drm_connector_state *conn_state,
				struct link_config_limits *limits,
				int timeslots,
				bool compute_pipe_bpp)
{
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
	const struct intel_connector *connector =
		to_intel_connector(conn_state->connector);
	const struct drm_display_mode *adjusted_mode =
		&pipe_config->hw.adjusted_mode;
	int ret;

	pipe_config->fec_enable = pipe_config->fec_enable ||
		(!intel_dp_is_edp(intel_dp) &&
		 intel_dp_supports_fec(intel_dp, connector, pipe_config));

	if (!intel_dp_supports_dsc(connector, pipe_config))
		return -EINVAL;

	if (!intel_dp_dsc_supports_format(connector, pipe_config->output_format))
		return -EINVAL;

	/*
	 * compute_pipe_bpp is set to false for the DP MST DSC case, where
	 * compressed_bpp is calculated only once the VCPI timeslots have been
	 * allocated, because the overall bpp calculation procedure is a bit
	 * different for MST.
	 */
	if (compute_pipe_bpp) {
		if (intel_dp_is_edp(intel_dp))
			ret = intel_edp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
							     conn_state, limits);
		else
			ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
							    conn_state, limits, timeslots);
		if (ret) {
			drm_dbg_kms(&dev_priv->drm,
				    "No valid pipe bpp for given mode ret = %d\n", ret);
			return ret;
		}
	}

	/* Calculate Slice count */
	if (intel_dp_is_edp(intel_dp)) {
		pipe_config->dsc.slice_count =
			drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
							true);
		if (!pipe_config->dsc.slice_count) {
			drm_dbg_kms(&dev_priv->drm, "Unsupported Slice Count %d\n",
				    pipe_config->dsc.slice_count);
			return -EINVAL;
		}
	} else {
		u8 dsc_dp_slice_count;

		dsc_dp_slice_count =
			intel_dp_dsc_get_slice_count(connector,
						     adjusted_mode->crtc_clock,
						     adjusted_mode->crtc_hdisplay,
						     pipe_config->bigjoiner_pipes);
		if (!dsc_dp_slice_count) {
			drm_dbg_kms(&dev_priv->drm,
				    "Compressed Slice Count not supported\n");
			return -EINVAL;
		}

		pipe_config->dsc.slice_count = dsc_dp_slice_count;
	}
	/*
	 * The VDSC engine operates at 1 pixel per clock, so if the peak pixel
	 * rate is greater than the maximum cdclk, or if the slice count is
	 * greater than 1, we need to use 2 VDSC instances.
	 */
	if (pipe_config->bigjoiner_pipes || pipe_config->dsc.slice_count > 1)
		pipe_config->dsc.dsc_split = true;

	ret = intel_dp_dsc_compute_params(connector, pipe_config);
	if (ret < 0) {
		drm_dbg_kms(&dev_priv->drm,
			    "Cannot compute valid DSC parameters for Input Bpp = %d "
			    "Compressed BPP = " BPP_X16_FMT "\n",
			    pipe_config->pipe_bpp,
			    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16));
		return ret;
	}

	pipe_config->dsc.compression_enable = true;
	drm_dbg_kms(&dev_priv->drm, "DP DSC computed with Input Bpp = %d "
		    "Compressed Bpp = " BPP_X16_FMT " Slice Count = %d\n",
		    pipe_config->pipe_bpp,
		    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16),
		    pipe_config->dsc.slice_count);

	return 0;
}
|
|
|
|
|
|
2023-09-21 19:51:52 +00:00
|
|
|
|
/**
 * intel_dp_compute_config_link_bpp_limits - compute output link bpp limits
 * @intel_dp: intel DP
 * @crtc_state: crtc state
 * @dsc: DSC compression mode
 * @limits: link configuration limits
 *
 * Calculates the output link min, max bpp values in @limits based on the
 * pipe bpp range, @crtc_state and @dsc mode.
 *
 * Returns %true in case of success.
 */
bool
intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
					const struct intel_crtc_state *crtc_state,
					bool dsc,
					struct link_config_limits *limits)
{
	struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev);
	const struct drm_display_mode *adjusted_mode =
		&crtc_state->hw.adjusted_mode;
	const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	const struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	int max_link_bpp_x16;

	max_link_bpp_x16 = min(crtc_state->max_link_bpp_x16,
			       to_bpp_x16(limits->pipe.max_bpp));

	if (!dsc) {
		max_link_bpp_x16 = rounddown(max_link_bpp_x16, to_bpp_x16(2 * 3));

		if (max_link_bpp_x16 < to_bpp_x16(limits->pipe.min_bpp))
			return false;

		limits->link.min_bpp_x16 = to_bpp_x16(limits->pipe.min_bpp);
	} else {
		/*
		 * TODO: set the DSC link limits already here, atm these are
		 * initialized only later in intel_edp_dsc_compute_pipe_bpp() /
		 * intel_dp_dsc_compute_pipe_bpp()
		 */
		limits->link.min_bpp_x16 = 0;
	}

	limits->link.max_bpp_x16 = max_link_bpp_x16;

	drm_dbg_kms(&i915->drm,
		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d max link_bpp " BPP_X16_FMT "\n",
		    encoder->base.base.id, encoder->base.name,
		    crtc->base.base.id, crtc->base.name,
		    adjusted_mode->crtc_clock,
		    dsc ? "on" : "off",
		    limits->max_lane_count,
		    limits->max_rate,
		    limits->pipe.max_bpp,
		    BPP_X16_ARGS(limits->link.max_bpp_x16));

	return true;
}

static bool
intel_dp_compute_config_limits(struct intel_dp *intel_dp,
			       struct intel_crtc_state *crtc_state,
			       bool respect_downstream_limits,
			       bool dsc,
			       struct link_config_limits *limits)
{
	limits->min_rate = intel_dp_common_rate(intel_dp, 0);
	limits->max_rate = intel_dp_max_link_rate(intel_dp);

	/* FIXME 128b/132b SST support missing */
	limits->max_rate = min(limits->max_rate, 810000);

	limits->min_lane_count = 1;
	limits->max_lane_count = intel_dp_max_lane_count(intel_dp);

	limits->pipe.min_bpp = intel_dp_min_bpp(crtc_state->output_format);
	limits->pipe.max_bpp = intel_dp_max_bpp(intel_dp, crtc_state,
						respect_downstream_limits);

	if (intel_dp->use_max_params) {
		/*
		 * Use the maximum clock and number of lanes the eDP panel
		 * advertises being capable of in case the initial fast
		 * optimal params failed us. The panels are generally
		 * designed to support only a single clock and lane
		 * configuration, and typically on older panels these
		 * values correspond to the native resolution of the panel.
		 */
		limits->min_lane_count = limits->max_lane_count;
		limits->min_rate = limits->max_rate;
	}

	intel_dp_adjust_compliance_config(intel_dp, crtc_state, limits);

	return intel_dp_compute_config_link_bpp_limits(intel_dp,
						       crtc_state,
						       dsc,
						       limits);
}

static int
intel_dp_compute_link_config(struct intel_encoder *encoder,
			     struct intel_crtc_state *pipe_config,
			     struct drm_connector_state *conn_state,
			     bool respect_downstream_limits)
{
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
	const struct intel_connector *connector =
		to_intel_connector(conn_state->connector);
	const struct drm_display_mode *adjusted_mode =
		&pipe_config->hw.adjusted_mode;
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	struct link_config_limits limits;
	bool joiner_needs_dsc = false;
	bool dsc_needed;
	int ret = 0;

	if (pipe_config->fec_enable &&
	    !intel_dp_supports_fec(intel_dp, connector, pipe_config))
		return -EINVAL;

	if (intel_dp_need_bigjoiner(intel_dp, adjusted_mode->crtc_hdisplay,
				    adjusted_mode->crtc_clock))
		pipe_config->bigjoiner_pipes = GENMASK(crtc->pipe + 1, crtc->pipe);

	/*
	 * Pipe joiner needs compression up to display 12 due to bandwidth
	 * limitation. DG2 onwards pipe joiner can be enabled without
	 * compression.
	 */
	joiner_needs_dsc = DISPLAY_VER(i915) < 13 && pipe_config->bigjoiner_pipes;

	dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en ||
		     !intel_dp_compute_config_limits(intel_dp, pipe_config,
						     respect_downstream_limits,
						     false,
						     &limits);

	if (!dsc_needed) {
		/*
		 * Optimize for slow and wide for everything, because some
		 * eDP 1.3 and 1.4 panels don't work well with fast and narrow.
		 */
		ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
							conn_state, &limits);
		if (ret)
			dsc_needed = true;
	}

	if (dsc_needed) {
		drm_dbg_kms(&i915->drm, "Try DSC (fallback=%s, joiner=%s, force=%s)\n",
			    str_yes_no(ret), str_yes_no(joiner_needs_dsc),
			    str_yes_no(intel_dp->force_dsc_en));

		if (!intel_dp_compute_config_limits(intel_dp, pipe_config,
						    respect_downstream_limits,
						    true,
						    &limits))
			return -EINVAL;

		ret = intel_dp_dsc_compute_config(intel_dp, pipe_config,
						  conn_state, &limits, 64, true);
		if (ret < 0)
			return ret;
	}

	if (pipe_config->dsc.compression_enable) {
		drm_dbg_kms(&i915->drm,
			    "DP lane count %d clock %d Input bpp %d Compressed bpp " BPP_X16_FMT "\n",
			    pipe_config->lane_count, pipe_config->port_clock,
			    pipe_config->pipe_bpp,
			    BPP_X16_ARGS(pipe_config->dsc.compressed_bpp_x16));

		drm_dbg_kms(&i915->drm,
			    "DP link rate required %i available %i\n",
			    intel_dp_link_required(adjusted_mode->crtc_clock,
						   to_bpp_int_roundup(pipe_config->dsc.compressed_bpp_x16)),
			    intel_dp_max_data_rate(pipe_config->port_clock,
						   pipe_config->lane_count));
	} else {
		drm_dbg_kms(&i915->drm, "DP lane count %d clock %d bpp %d\n",
			    pipe_config->lane_count, pipe_config->port_clock,
			    pipe_config->pipe_bpp);

		drm_dbg_kms(&i915->drm,
			    "DP link rate required %i available %i\n",
			    intel_dp_link_required(adjusted_mode->crtc_clock,
						   pipe_config->pipe_bpp),
			    intel_dp_max_data_rate(pipe_config->port_clock,
						   pipe_config->lane_count));
	}

	return 0;
}

bool intel_dp_limited_color_range(const struct intel_crtc_state *crtc_state,
				  const struct drm_connector_state *conn_state)
{
	const struct intel_digital_connector_state *intel_conn_state =
		to_intel_digital_connector_state(conn_state);
	const struct drm_display_mode *adjusted_mode =
		&crtc_state->hw.adjusted_mode;

	/*
	 * Our YCbCr output is always limited range.
	 * crtc_state->limited_color_range only applies to RGB,
	 * and it must never be set for YCbCr or we risk setting
	 * some conflicting bits in TRANSCONF which will mess up
	 * the colors on the monitor.
	 */
	if (crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB)
		return false;

	if (intel_conn_state->broadcast_rgb == INTEL_BROADCAST_RGB_AUTO) {
		/*
		 * See:
		 * CEA-861-E - 5.1 Default Encoding Parameters
		 * VESA DisplayPort Ver.1.2a - 5.1.1.1 Video Colorimetry
		 */
		return crtc_state->pipe_bpp != 18 &&
			drm_default_rgb_quant_range(adjusted_mode) ==
			HDMI_QUANTIZATION_RANGE_LIMITED;
	} else {
		return intel_conn_state->broadcast_rgb ==
			INTEL_BROADCAST_RGB_LIMITED;
	}
}

static bool intel_dp_port_has_audio(struct drm_i915_private *dev_priv,
				    enum port port)
{
	if (IS_G4X(dev_priv))
		return false;

	if (DISPLAY_VER(dev_priv) < 12 && port == PORT_A)
		return false;

	return true;
}

static void intel_dp_compute_vsc_colorimetry(const struct intel_crtc_state *crtc_state,
					     const struct drm_connector_state *conn_state,
					     struct drm_dp_vsc_sdp *vsc)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);

	if (crtc_state->has_panel_replay) {
		/*
		 * Prepare VSC Header for SU as per DP 2.0 spec, Table 2-223
		 * VSC SDP supporting 3D stereo, Panel Replay, and Pixel
		 * Encoding/Colorimetry Format indication.
		 */
		vsc->revision = 0x7;
	} else {
		/*
		 * Prepare VSC Header for SU as per DP 1.4 spec, Table 2-118
		 * VSC SDP supporting 3D stereo, PSR2, and Pixel Encoding/
		 * Colorimetry Format indication.
		 */
		vsc->revision = 0x5;
	}

	vsc->length = 0x13;

	/* DP 1.4a spec, Table 2-120 */
	switch (crtc_state->output_format) {
	case INTEL_OUTPUT_FORMAT_YCBCR444:
		vsc->pixelformat = DP_PIXELFORMAT_YUV444;
		break;
	case INTEL_OUTPUT_FORMAT_YCBCR420:
		vsc->pixelformat = DP_PIXELFORMAT_YUV420;
		break;
	case INTEL_OUTPUT_FORMAT_RGB:
	default:
		vsc->pixelformat = DP_PIXELFORMAT_RGB;
	}

	switch (conn_state->colorspace) {
	case DRM_MODE_COLORIMETRY_BT709_YCC:
		vsc->colorimetry = DP_COLORIMETRY_BT709_YCC;
		break;
	case DRM_MODE_COLORIMETRY_XVYCC_601:
		vsc->colorimetry = DP_COLORIMETRY_XVYCC_601;
		break;
	case DRM_MODE_COLORIMETRY_XVYCC_709:
		vsc->colorimetry = DP_COLORIMETRY_XVYCC_709;
		break;
	case DRM_MODE_COLORIMETRY_SYCC_601:
		vsc->colorimetry = DP_COLORIMETRY_SYCC_601;
		break;
	case DRM_MODE_COLORIMETRY_OPYCC_601:
		vsc->colorimetry = DP_COLORIMETRY_OPYCC_601;
		break;
	case DRM_MODE_COLORIMETRY_BT2020_CYCC:
		vsc->colorimetry = DP_COLORIMETRY_BT2020_CYCC;
		break;
	case DRM_MODE_COLORIMETRY_BT2020_RGB:
		vsc->colorimetry = DP_COLORIMETRY_BT2020_RGB;
		break;
	case DRM_MODE_COLORIMETRY_BT2020_YCC:
		vsc->colorimetry = DP_COLORIMETRY_BT2020_YCC;
		break;
	case DRM_MODE_COLORIMETRY_DCI_P3_RGB_D65:
	case DRM_MODE_COLORIMETRY_DCI_P3_RGB_THEATER:
		vsc->colorimetry = DP_COLORIMETRY_DCI_P3_RGB;
		break;
	default:
		/*
		 * RGB->YCBCR color conversion uses the BT.709
		 * color space.
		 */
		if (crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
			vsc->colorimetry = DP_COLORIMETRY_BT709_YCC;
		else
			vsc->colorimetry = DP_COLORIMETRY_DEFAULT;
		break;
	}

	vsc->bpc = crtc_state->pipe_bpp / 3;

	/* only RGB pixelformat supports 6 bpc */
	drm_WARN_ON(&dev_priv->drm,
		    vsc->bpc == 6 && vsc->pixelformat != DP_PIXELFORMAT_RGB);

	/* all YCbCr are always limited range */
	vsc->dynamic_range = DP_DYNAMIC_RANGE_CTA;
	vsc->content_type = DP_CONTENT_TYPE_NOT_DEFINED;
}

static void intel_dp_compute_vsc_sdp(struct intel_dp *intel_dp,
				     struct intel_crtc_state *crtc_state,
				     const struct drm_connector_state *conn_state)
{
	struct drm_dp_vsc_sdp *vsc = &crtc_state->infoframes.vsc;

	/* When a crtc state has PSR, VSC SDP will be handled by PSR routine */
	if (crtc_state->has_psr)
		return;

	if (!intel_dp_needs_vsc_sdp(crtc_state, conn_state))
		return;

	crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_VSC);
	vsc->sdp_type = DP_SDP_VSC;
	intel_dp_compute_vsc_colorimetry(crtc_state, conn_state,
					 &crtc_state->infoframes.vsc);
}

void intel_dp_compute_psr_vsc_sdp(struct intel_dp *intel_dp,
				  const struct intel_crtc_state *crtc_state,
				  const struct drm_connector_state *conn_state,
				  struct drm_dp_vsc_sdp *vsc)
{
	vsc->sdp_type = DP_SDP_VSC;

	if (crtc_state->has_psr2) {
		if (intel_dp->psr.colorimetry_support &&
		    intel_dp_needs_vsc_sdp(crtc_state, conn_state)) {
			/* [PSR2, +Colorimetry] */
			intel_dp_compute_vsc_colorimetry(crtc_state, conn_state,
							 vsc);
		} else {
			/*
			 * [PSR2, -Colorimetry]
			 * Prepare VSC Header for SU as per eDP 1.4 spec, Table 6-11
			 * 3D stereo + PSR/PSR2 + Y-coordinate.
			 */
			vsc->revision = 0x4;
			vsc->length = 0xe;
		}
	} else if (crtc_state->has_panel_replay) {
		if (intel_dp->psr.colorimetry_support &&
		    intel_dp_needs_vsc_sdp(crtc_state, conn_state)) {
			/* [Panel Replay with colorimetry info] */
			intel_dp_compute_vsc_colorimetry(crtc_state, conn_state,
							 vsc);
		} else {
			/*
			 * [Panel Replay without colorimetry info]
			 * Prepare VSC Header for SU as per DP 2.0 spec, Table 2-223
			 * VSC SDP supporting 3D stereo + Panel Replay.
			 */
			vsc->revision = 0x6;
			vsc->length = 0x10;
		}
	} else {
		/*
		 * [PSR1]
		 * Prepare VSC Header for SU as per DP 1.4 spec, Table 2-118
		 * VSC SDP supporting 3D stereo + PSR (applies to eDP v1.3 or
		 * higher).
		 */
		vsc->revision = 0x2;
		vsc->length = 0x8;
	}
}

static void
intel_dp_compute_hdr_metadata_infoframe_sdp(struct intel_dp *intel_dp,
					    struct intel_crtc_state *crtc_state,
					    const struct drm_connector_state *conn_state)
{
	int ret;
	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
	struct hdmi_drm_infoframe *drm_infoframe = &crtc_state->infoframes.drm.drm;

	if (!conn_state->hdr_output_metadata)
		return;

	ret = drm_hdmi_infoframe_set_hdr_metadata(drm_infoframe, conn_state);

	if (ret) {
		drm_dbg_kms(&dev_priv->drm, "couldn't set HDR metadata in infoframe\n");
		return;
	}

	crtc_state->infoframes.enable |=
		intel_hdmi_infoframe_enable(HDMI_PACKET_TYPE_GAMUT_METADATA);
}

static bool cpu_transcoder_has_drrs(struct drm_i915_private *i915,
				    enum transcoder cpu_transcoder)
{
	if (HAS_DOUBLE_BUFFERED_M_N(i915))
		return true;

	return intel_cpu_transcoder_has_m2_n2(i915, cpu_transcoder);
}

static bool can_enable_drrs(struct intel_connector *connector,
			    const struct intel_crtc_state *pipe_config,
			    const struct drm_display_mode *downclock_mode)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);

	if (pipe_config->vrr.enable)
		return false;

	/*
	 * DRRS and PSR can't be enabled together, so giving preference to PSR
	 * as it allows more power savings by completely shutting down the
	 * display. To guarantee this, intel_drrs_compute_config() must be
	 * called after intel_psr_compute_config().
	 */
	if (pipe_config->has_psr)
		return false;

	/* FIXME missing FDI M2/N2 etc. */
	if (pipe_config->has_pch_encoder)
		return false;

	if (!cpu_transcoder_has_drrs(i915, pipe_config->cpu_transcoder))
		return false;

	return downclock_mode &&
		intel_panel_drrs_type(connector) == DRRS_TYPE_SEAMLESS;
}

static void
|
|
|
|
|
intel_dp_drrs_compute_config(struct intel_connector *connector,
|
|
|
|
|
struct intel_crtc_state *pipe_config,
|
2023-11-10 10:10:12 +00:00
|
|
|
|
int link_bpp_x16)
|
2022-03-31 11:28:20 +00:00
|
|
|
|
{
|
|
|
|
|
struct drm_i915_private *i915 = to_i915(connector->base.dev);
|
|
|
|
|
const struct drm_display_mode *downclock_mode =
|
|
|
|
|
intel_panel_downclock_mode(connector, &pipe_config->hw.adjusted_mode);
|
|
|
|
|
int pixel_clock;
|
|
|
|
|
|
2024-04-04 21:34:28 +00:00
|
|
|
|
/*
|
|
|
|
|
* FIXME all joined pipes share the same transcoder.
|
|
|
|
|
* Need to account for that when updating M/N live.
|
|
|
|
|
*/
|
|
|
|
|
if (has_seamless_m_n(connector) && !pipe_config->bigjoiner_pipes)
|
2023-09-01 13:04:33 +00:00
|
|
|
|
pipe_config->update_m_n = true;
|
2022-09-07 09:10:55 +00:00
|
|
|
|
|
2022-03-31 11:28:20 +00:00
|
|
|
|
if (!can_enable_drrs(connector, pipe_config, downclock_mode)) {
|
|
|
|
|
if (intel_cpu_transcoder_has_m2_n2(i915, pipe_config->cpu_transcoder))
|
|
|
|
|
intel_zero_m_n(&pipe_config->dp_m2_n2);
|
|
|
|
|
return;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if (IS_IRONLAKE(i915) || IS_SANDYBRIDGE(i915) || IS_IVYBRIDGE(i915))
|
2022-05-10 10:42:39 +00:00
|
|
|
|
pipe_config->msa_timing_delay = connector->panel.vbt.edp.drrs_msa_timing_delay;
|
2022-03-31 11:28:20 +00:00
|
|
|
|
|
|
|
|
|
pipe_config->has_drrs = true;
|
|
|
|
|
|
|
|
|
|
pixel_clock = downclock_mode->clock;
|
|
|
|
|
if (pipe_config->splitter.enable)
|
|
|
|
|
pixel_clock /= pipe_config->splitter.link_count;
|
|
|
|
|
|
2023-11-10 10:10:12 +00:00
|
|
|
|
intel_link_compute_m_n(link_bpp_x16, pipe_config->lane_count, pixel_clock,
|
2023-10-24 01:09:07 +00:00
|
|
|
|
pipe_config->port_clock,
|
|
|
|
|
intel_dp_bw_fec_overhead(pipe_config->fec_enable),
|
|
|
|
|
&pipe_config->dp_m2_n2);
|
2022-03-31 11:28:20 +00:00
|
|
|
|
|
|
|
|
|
/* FIXME: abstract this better */
|
|
|
|
|
if (pipe_config->splitter.enable)
|
|
|
|
|
pipe_config->dp_m2_n2.data_m *= pipe_config->splitter.link_count;
|
|
|
|
|
}

static bool intel_dp_has_audio(struct intel_encoder *encoder,
			       struct intel_crtc_state *crtc_state,
			       const struct drm_connector_state *conn_state)
{
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	const struct intel_digital_connector_state *intel_conn_state =
		to_intel_digital_connector_state(conn_state);
	struct intel_connector *connector =
		to_intel_connector(conn_state->connector);

	if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
	    !intel_dp_port_has_audio(i915, encoder->port))
		return false;

	if (intel_conn_state->force_audio == HDMI_AUDIO_AUTO)
		return connector->base.display_info.has_audio;
	else
		return intel_conn_state->force_audio == HDMI_AUDIO_ON;
}

static int
intel_dp_compute_output_format(struct intel_encoder *encoder,
			       struct intel_crtc_state *crtc_state,
			       struct drm_connector_state *conn_state,
			       bool respect_downstream_limits)
{
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	struct intel_connector *connector = intel_dp->attached_connector;
	const struct drm_display_info *info = &connector->base.display_info;
	const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode;
	bool ycbcr_420_only;
	int ret;

	ycbcr_420_only = drm_mode_is_420_only(info, adjusted_mode);

	if (ycbcr_420_only && !connector->base.ycbcr_420_allowed) {
		drm_dbg_kms(&i915->drm,
			    "YCbCr 4:2:0 mode but YCbCr 4:2:0 output not possible. Falling back to RGB.\n");
		crtc_state->sink_format = INTEL_OUTPUT_FORMAT_RGB;
	} else {
		crtc_state->sink_format = intel_dp_sink_format(connector, adjusted_mode);
	}

	crtc_state->output_format = intel_dp_output_format(connector, crtc_state->sink_format);

	ret = intel_dp_compute_link_config(encoder, crtc_state, conn_state,
					   respect_downstream_limits);
	if (ret) {
		if (crtc_state->sink_format == INTEL_OUTPUT_FORMAT_YCBCR420 ||
		    !connector->base.ycbcr_420_allowed ||
		    !drm_mode_is_420_also(info, adjusted_mode))
			return ret;

		crtc_state->sink_format = INTEL_OUTPUT_FORMAT_YCBCR420;
		crtc_state->output_format = intel_dp_output_format(connector,
								   crtc_state->sink_format);
		ret = intel_dp_compute_link_config(encoder, crtc_state, conn_state,
						   respect_downstream_limits);
	}

	return ret;
}

void
intel_dp_audio_compute_config(struct intel_encoder *encoder,
			      struct intel_crtc_state *pipe_config,
			      struct drm_connector_state *conn_state)
{
	pipe_config->has_audio =
		intel_dp_has_audio(encoder, pipe_config, conn_state) &&
		intel_audio_compute_config(encoder, pipe_config, conn_state);

	pipe_config->sdp_split_enable = pipe_config->has_audio &&
					intel_dp_is_uhbr(pipe_config);
}

int
intel_dp_compute_config(struct intel_encoder *encoder,
			struct intel_crtc_state *pipe_config,
			struct drm_connector_state *conn_state)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	const struct drm_display_mode *fixed_mode;
	struct intel_connector *connector = intel_dp->attached_connector;
	int ret = 0, link_bpp_x16;

	if (HAS_PCH_SPLIT(dev_priv) && !HAS_DDI(dev_priv) && encoder->port != PORT_A)
		pipe_config->has_pch_encoder = true;

	fixed_mode = intel_panel_fixed_mode(connector, adjusted_mode);
	if (intel_dp_is_edp(intel_dp) && fixed_mode) {
		ret = intel_panel_compute_config(connector, adjusted_mode);
		if (ret)
			return ret;
	}

	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
		return -EINVAL;

	if (!connector->base.interlace_allowed &&
	    adjusted_mode->flags & DRM_MODE_FLAG_INTERLACE)
		return -EINVAL;

	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK)
		return -EINVAL;

	if (intel_dp_hdisplay_bad(dev_priv, adjusted_mode->crtc_hdisplay))
		return -EINVAL;

	/*
	 * Try to respect downstream TMDS clock limits first, if
	 * that fails assume the user might know something we don't.
	 */
	ret = intel_dp_compute_output_format(encoder, pipe_config, conn_state, true);
	if (ret)
		ret = intel_dp_compute_output_format(encoder, pipe_config, conn_state, false);
	if (ret)
		return ret;

	if ((intel_dp_is_edp(intel_dp) && fixed_mode) ||
	    pipe_config->output_format == INTEL_OUTPUT_FORMAT_YCBCR420) {
		ret = intel_panel_fitting(pipe_config, conn_state);
		if (ret)
			return ret;
	}

	pipe_config->limited_color_range =
		intel_dp_limited_color_range(pipe_config, conn_state);

	pipe_config->enhanced_framing =
		drm_dp_enhanced_frame_cap(intel_dp->dpcd);

	if (pipe_config->dsc.compression_enable)
		link_bpp_x16 = pipe_config->dsc.compressed_bpp_x16;
	else
		link_bpp_x16 = to_bpp_x16(intel_dp_output_bpp(pipe_config->output_format,
							      pipe_config->pipe_bpp));

	if (intel_dp->mso_link_count) {
		int n = intel_dp->mso_link_count;
		int overlap = intel_dp->mso_pixel_overlap;

		pipe_config->splitter.enable = true;
		pipe_config->splitter.link_count = n;
		pipe_config->splitter.pixel_overlap = overlap;

		drm_dbg_kms(&dev_priv->drm, "MSO link count %d, pixel overlap %d\n",
			    n, overlap);

		adjusted_mode->crtc_hdisplay = adjusted_mode->crtc_hdisplay / n + overlap;
		adjusted_mode->crtc_hblank_start = adjusted_mode->crtc_hblank_start / n + overlap;
		adjusted_mode->crtc_hblank_end = adjusted_mode->crtc_hblank_end / n + overlap;
		adjusted_mode->crtc_hsync_start = adjusted_mode->crtc_hsync_start / n + overlap;
		adjusted_mode->crtc_hsync_end = adjusted_mode->crtc_hsync_end / n + overlap;
		adjusted_mode->crtc_htotal = adjusted_mode->crtc_htotal / n + overlap;
		adjusted_mode->crtc_clock /= n;
	}

	intel_dp_audio_compute_config(encoder, pipe_config, conn_state);

	intel_link_compute_m_n(link_bpp_x16,
			       pipe_config->lane_count,
			       adjusted_mode->crtc_clock,
			       pipe_config->port_clock,
			       intel_dp_bw_fec_overhead(pipe_config->fec_enable),
			       &pipe_config->dp_m_n);

	/* FIXME: abstract this better */
	if (pipe_config->splitter.enable)
		pipe_config->dp_m_n.data_m *= pipe_config->splitter.link_count;

	if (!HAS_DDI(dev_priv))
		g4x_dp_set_clock(encoder, pipe_config);

	intel_vrr_compute_config(pipe_config, conn_state);
	intel_psr_compute_config(intel_dp, pipe_config, conn_state);
	intel_dp_drrs_compute_config(connector, pipe_config, link_bpp_x16);
	intel_dp_compute_vsc_sdp(intel_dp, pipe_config, conn_state);
	intel_dp_compute_hdr_metadata_infoframe_sdp(intel_dp, pipe_config, conn_state);

	return 0;
}

void intel_dp_set_link_params(struct intel_dp *intel_dp,
			      int link_rate, int lane_count)
{
	memset(intel_dp->train_set, 0, sizeof(intel_dp->train_set));
	intel_dp->link_trained = false;
	intel_dp->link_rate = link_rate;
	intel_dp->lane_count = lane_count;
}

static void intel_dp_reset_max_link_params(struct intel_dp *intel_dp)
{
	intel_dp->max_link_lane_count = intel_dp_max_common_lane_count(intel_dp);
	intel_dp->max_link_rate = intel_dp_max_common_rate(intel_dp);
}

/* Enable backlight PWM and backlight PP control. */
void intel_edp_backlight_on(const struct intel_crtc_state *crtc_state,
			    const struct drm_connector_state *conn_state)
{
	struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(conn_state->best_encoder));
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	if (!intel_dp_is_edp(intel_dp))
		return;

	drm_dbg_kms(&i915->drm, "\n");

	intel_backlight_enable(crtc_state, conn_state);
	intel_pps_backlight_on(intel_dp);
}

/* Disable backlight PP control and backlight PWM. */
void intel_edp_backlight_off(const struct drm_connector_state *old_conn_state)
{
	struct intel_dp *intel_dp = enc_to_intel_dp(to_intel_encoder(old_conn_state->best_encoder));
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	if (!intel_dp_is_edp(intel_dp))
		return;

	drm_dbg_kms(&i915->drm, "\n");

	intel_pps_backlight_off(intel_dp);
	intel_backlight_disable(old_conn_state);
}

static bool downstream_hpd_needs_d0(struct intel_dp *intel_dp)
{
	/*
	 * DPCD 1.2+ should support BRANCH_DEVICE_CTRL, and thus
	 * be capable of signalling downstream hpd with a long pulse.
	 * Whether or not that means D3 is safe to use is not clear,
	 * but let's assume so until proven otherwise.
	 *
	 * FIXME should really check all downstream ports...
	 */
	return intel_dp->dpcd[DP_DPCD_REV] == 0x11 &&
		drm_dp_is_branch(intel_dp->dpcd) &&
		intel_dp->downstream_ports[0] & DP_DS_PORT_HPD;
}

static int
write_dsc_decompression_flag(struct drm_dp_aux *aux, u8 flag, bool set)
{
	int err;
	u8 val;

	err = drm_dp_dpcd_readb(aux, DP_DSC_ENABLE, &val);
	if (err < 0)
		return err;

	if (set)
		val |= flag;
	else
		val &= ~flag;

	return drm_dp_dpcd_writeb(aux, DP_DSC_ENABLE, val);
}

static void
intel_dp_sink_set_dsc_decompression(struct intel_connector *connector,
				    bool enable)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);

	if (write_dsc_decompression_flag(connector->dp.dsc_decompression_aux,
					 DP_DECOMPRESSION_EN, enable) < 0)
		drm_dbg_kms(&i915->drm,
			    "Failed to %s sink decompression state\n",
			    str_enable_disable(enable));
}

static void
intel_dp_sink_set_dsc_passthrough(const struct intel_connector *connector,
				  bool enable)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);
	struct drm_dp_aux *aux = connector->port ?
				 connector->port->passthrough_aux : NULL;

	if (!aux)
		return;

	if (write_dsc_decompression_flag(aux,
					 DP_DSC_PASSTHROUGH_EN, enable) < 0)
		drm_dbg_kms(&i915->drm,
			    "Failed to %s sink compression passthrough state\n",
			    str_enable_disable(enable));
}

static int intel_dp_dsc_aux_ref_count(struct intel_atomic_state *state,
				      const struct intel_connector *connector,
				      bool for_get_ref)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);
	struct drm_connector *_connector_iter;
	struct drm_connector_state *old_conn_state;
	struct drm_connector_state *new_conn_state;
	int ref_count = 0;
	int i;

	/*
	 * On SST the decompression AUX device won't be shared, each connector
	 * uses for this its own AUX targeting the sink device.
	 */
	if (!connector->mst_port)
		return connector->dp.dsc_decompression_enabled ? 1 : 0;

	for_each_oldnew_connector_in_state(&state->base, _connector_iter,
					   old_conn_state, new_conn_state, i) {
		const struct intel_connector *
			connector_iter = to_intel_connector(_connector_iter);

		if (connector_iter->mst_port != connector->mst_port)
			continue;

		if (!connector_iter->dp.dsc_decompression_enabled)
			continue;

		drm_WARN_ON(&i915->drm,
			    (for_get_ref && !new_conn_state->crtc) ||
			    (!for_get_ref && !old_conn_state->crtc));

		if (connector_iter->dp.dsc_decompression_aux ==
		    connector->dp.dsc_decompression_aux)
			ref_count++;
	}

	return ref_count;
}

static bool intel_dp_dsc_aux_get_ref(struct intel_atomic_state *state,
				     struct intel_connector *connector)
{
	bool ret = intel_dp_dsc_aux_ref_count(state, connector, true) == 0;

	connector->dp.dsc_decompression_enabled = true;

	return ret;
}

static bool intel_dp_dsc_aux_put_ref(struct intel_atomic_state *state,
				     struct intel_connector *connector)
{
	connector->dp.dsc_decompression_enabled = false;

	return intel_dp_dsc_aux_ref_count(state, connector, false) == 0;
}

/**
 * intel_dp_sink_enable_decompression - Enable DSC decompression in sink/last branch device
 * @state: atomic state
 * @connector: connector to enable the decompression for
 * @new_crtc_state: new state for the CRTC driving @connector
 *
 * Enable the DSC decompression if required in the %DP_DSC_ENABLE DPCD
 * register of the appropriate sink/branch device. On SST this is always the
 * sink device, whereas on MST based on each device's DSC capabilities it's
 * either the last branch device (enabling decompression in it) or both the
 * last branch device (enabling passthrough in it) and the sink device
 * (enabling decompression in it).
 */
void intel_dp_sink_enable_decompression(struct intel_atomic_state *state,
					struct intel_connector *connector,
					const struct intel_crtc_state *new_crtc_state)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);

	if (!new_crtc_state->dsc.compression_enable)
		return;

	if (drm_WARN_ON(&i915->drm,
			!connector->dp.dsc_decompression_aux ||
			connector->dp.dsc_decompression_enabled))
		return;

	if (!intel_dp_dsc_aux_get_ref(state, connector))
		return;

	intel_dp_sink_set_dsc_passthrough(connector, true);
	intel_dp_sink_set_dsc_decompression(connector, true);
}

/**
 * intel_dp_sink_disable_decompression - Disable DSC decompression in sink/last branch device
 * @state: atomic state
 * @connector: connector to disable the decompression for
 * @old_crtc_state: old state for the CRTC driving @connector
 *
 * Disable the DSC decompression if required in the %DP_DSC_ENABLE DPCD
 * register of the appropriate sink/branch device, corresponding to the
 * sequence in intel_dp_sink_enable_decompression().
 */
void intel_dp_sink_disable_decompression(struct intel_atomic_state *state,
					 struct intel_connector *connector,
					 const struct intel_crtc_state *old_crtc_state)
{
	struct drm_i915_private *i915 = to_i915(state->base.dev);

	if (!old_crtc_state->dsc.compression_enable)
		return;

	if (drm_WARN_ON(&i915->drm,
			!connector->dp.dsc_decompression_aux ||
			!connector->dp.dsc_decompression_enabled))
		return;

	if (!intel_dp_dsc_aux_put_ref(state, connector))
		return;

	intel_dp_sink_set_dsc_decompression(connector, false);
	intel_dp_sink_set_dsc_passthrough(connector, false);
}

static void
intel_edp_init_source_oui(struct intel_dp *intel_dp, bool careful)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	u8 oui[] = { 0x00, 0xaa, 0x01 };
	u8 buf[3] = {};

	/*
	 * During driver init, we want to be careful and avoid changing the
	 * source OUI if it's already set to what we want, so as to avoid
	 * clearing any state by accident.
	 */
	if (careful) {
		if (drm_dp_dpcd_read(&intel_dp->aux, DP_SOURCE_OUI, buf, sizeof(buf)) < 0)
			drm_err(&i915->drm, "Failed to read source OUI\n");

		if (memcmp(oui, buf, sizeof(oui)) == 0)
			return;
	}

	if (drm_dp_dpcd_write(&intel_dp->aux, DP_SOURCE_OUI, oui, sizeof(oui)) < 0)
		drm_err(&i915->drm, "Failed to write source OUI\n");

	intel_dp->last_oui_write = jiffies;
}

void intel_dp_wait_source_oui(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] Performing OUI wait (%u ms)\n",
		    connector->base.base.id, connector->base.name,
		    connector->panel.vbt.backlight.hdr_dpcd_refresh_timeout);

	wait_remaining_ms_from_jiffies(intel_dp->last_oui_write,
				       connector->panel.vbt.backlight.hdr_dpcd_refresh_timeout);
}

/* If the device supports it, try to set the power state appropriately */
void intel_dp_set_power(struct intel_dp *intel_dp, u8 mode)
{
	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	int ret, i;

	/* Should have a valid DPCD by this point */
	if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
		return;

	if (mode != DP_SET_POWER_D0) {
		if (downstream_hpd_needs_d0(intel_dp))
			return;

		ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
	} else {
		struct intel_lspcon *lspcon = dp_to_lspcon(intel_dp);

		lspcon_resume(dp_to_dig_port(intel_dp));

		/* Write the source OUI as early as possible */
		if (intel_dp_is_edp(intel_dp))
			intel_edp_init_source_oui(intel_dp, false);

		/*
		 * When turning on, we need to retry for 1ms to give the sink
		 * time to wake up.
		 */
		for (i = 0; i < 3; i++) {
			ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, mode);
			if (ret == 1)
				break;
			msleep(1);
		}

		if (ret == 1 && lspcon->active)
			lspcon_wait_pcon_mode(lspcon);
	}

	if (ret != 1)
		drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Set power to %s failed\n",
			    encoder->base.base.id, encoder->base.name,
			    mode == DP_SET_POWER_D0 ? "D0" : "D3");
}

static bool
intel_dp_get_dpcd(struct intel_dp *intel_dp);

/**
 * intel_dp_sync_state - sync the encoder state during init/resume
 * @encoder: intel encoder to sync
 * @crtc_state: state for the CRTC connected to the encoder
 *
 * Sync any state stored in the encoder wrt. HW state during driver init
 * and system resume.
 */
void intel_dp_sync_state(struct intel_encoder *encoder,
			 const struct intel_crtc_state *crtc_state)
{
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);

	if (!crtc_state)
		return;

	/*
	 * Don't clobber DPCD if it's been already read out during output
	 * setup (eDP) or detect.
	 */
	if (intel_dp->dpcd[DP_DPCD_REV] == 0)
		intel_dp_get_dpcd(intel_dp);

	intel_dp_reset_max_link_params(intel_dp);
}

bool intel_dp_initial_fastset_check(struct intel_encoder *encoder,
				    struct intel_crtc_state *crtc_state)
{
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	bool fastset = true;

	/*
	 * If BIOS has set an unsupported or non-standard link rate for some
	 * reason force an encoder recompute and full modeset.
	 */
	if (intel_dp_rate_index(intel_dp->source_rates, intel_dp->num_source_rates,
				crtc_state->port_clock) < 0) {
		drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Forcing full modeset due to unsupported link rate\n",
			    encoder->base.base.id, encoder->base.name);
		crtc_state->uapi.connectors_changed = true;
		fastset = false;
	}

	/*
	 * FIXME hack to force full modeset when DSC is being used.
	 *
	 * As long as we do not have full state readout and config comparison
	 * of crtc_state->dsc, we have no way to ensure reliable fastset.
	 * Remove once we have readout for DSC.
	 */
	if (crtc_state->dsc.compression_enable) {
		drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Forcing full modeset due to DSC being enabled\n",
			    encoder->base.base.id, encoder->base.name);
		crtc_state->uapi.mode_changed = true;
		fastset = false;
	}

	if (CAN_PSR(intel_dp)) {
		drm_dbg_kms(&i915->drm, "[ENCODER:%d:%s] Forcing full modeset to compute PSR state\n",
			    encoder->base.base.id, encoder->base.name);
		crtc_state->uapi.mode_changed = true;
		fastset = false;
	}

	return fastset;
}

static void intel_dp_get_pcon_dsc_cap(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	/* Clear the cached register set to avoid using stale values */

	memset(intel_dp->pcon_dsc_dpcd, 0, sizeof(intel_dp->pcon_dsc_dpcd));

	if (drm_dp_dpcd_read(&intel_dp->aux, DP_PCON_DSC_ENCODER,
			     intel_dp->pcon_dsc_dpcd,
			     sizeof(intel_dp->pcon_dsc_dpcd)) < 0)
		drm_err(&i915->drm, "Failed to read DPCD register 0x%x\n",
			DP_PCON_DSC_ENCODER);

	drm_dbg_kms(&i915->drm, "PCON ENCODER DSC DPCD: %*ph\n",
		    (int)sizeof(intel_dp->pcon_dsc_dpcd), intel_dp->pcon_dsc_dpcd);
}

static int intel_dp_pcon_get_frl_mask(u8 frl_bw_mask)
{
	int bw_gbps[] = {9, 18, 24, 32, 40, 48};
	int i;

	for (i = ARRAY_SIZE(bw_gbps) - 1; i >= 0; i--) {
		if (frl_bw_mask & (1 << i))
			return bw_gbps[i];
	}
	return 0;
}

static int intel_dp_pcon_set_frl_mask(int max_frl)
{
	switch (max_frl) {
	case 48:
		return DP_PCON_FRL_BW_MASK_48GBPS;
	case 40:
		return DP_PCON_FRL_BW_MASK_40GBPS;
	case 32:
		return DP_PCON_FRL_BW_MASK_32GBPS;
	case 24:
		return DP_PCON_FRL_BW_MASK_24GBPS;
	case 18:
		return DP_PCON_FRL_BW_MASK_18GBPS;
	case 9:
		return DP_PCON_FRL_BW_MASK_9GBPS;
	}

	return 0;
}

static int intel_dp_hdmi_sink_max_frl(struct intel_dp *intel_dp)
{
	struct intel_connector *intel_connector = intel_dp->attached_connector;
	struct drm_connector *connector = &intel_connector->base;
	int max_frl_rate;
	int max_lanes, rate_per_lane;
	int max_dsc_lanes, dsc_rate_per_lane;

	max_lanes = connector->display_info.hdmi.max_lanes;
	rate_per_lane = connector->display_info.hdmi.max_frl_rate_per_lane;
	max_frl_rate = max_lanes * rate_per_lane;

	if (connector->display_info.hdmi.dsc_cap.v_1p2) {
		max_dsc_lanes = connector->display_info.hdmi.dsc_cap.max_lanes;
		dsc_rate_per_lane = connector->display_info.hdmi.dsc_cap.max_frl_rate_per_lane;
		if (max_dsc_lanes && dsc_rate_per_lane)
			max_frl_rate = min(max_frl_rate, max_dsc_lanes * dsc_rate_per_lane);
	}

	return max_frl_rate;
}

static bool
intel_dp_pcon_is_frl_trained(struct intel_dp *intel_dp,
			     u8 max_frl_bw_mask, u8 *frl_trained_mask)
{
	if (drm_dp_pcon_hdmi_link_active(&intel_dp->aux) &&
	    drm_dp_pcon_hdmi_link_mode(&intel_dp->aux, frl_trained_mask) == DP_PCON_HDMI_MODE_FRL &&
	    *frl_trained_mask >= max_frl_bw_mask)
		return true;

	return false;
}

static int intel_dp_pcon_start_frl_training(struct intel_dp *intel_dp)
{
#define TIMEOUT_FRL_READY_MS 500
#define TIMEOUT_HDMI_LINK_ACTIVE_MS 1000

	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	int max_frl_bw, max_pcon_frl_bw, max_edid_frl_bw, ret;
	u8 max_frl_bw_mask = 0, frl_trained_mask;
	bool is_active;

	max_pcon_frl_bw = intel_dp->dfp.pcon_max_frl_bw;
	drm_dbg(&i915->drm, "PCON max rate = %d Gbps\n", max_pcon_frl_bw);

	max_edid_frl_bw = intel_dp_hdmi_sink_max_frl(intel_dp);
	drm_dbg(&i915->drm, "Sink max rate from EDID = %d Gbps\n", max_edid_frl_bw);

	max_frl_bw = min(max_edid_frl_bw, max_pcon_frl_bw);

	if (max_frl_bw <= 0)
		return -EINVAL;

	max_frl_bw_mask = intel_dp_pcon_set_frl_mask(max_frl_bw);
	drm_dbg(&i915->drm, "MAX_FRL_BW_MASK = %u\n", max_frl_bw_mask);

	if (intel_dp_pcon_is_frl_trained(intel_dp, max_frl_bw_mask, &frl_trained_mask))
		goto frl_trained;

	ret = drm_dp_pcon_frl_prepare(&intel_dp->aux, false);
	if (ret < 0)
		return ret;
	/* Wait for PCON to be FRL Ready */
	wait_for(is_active = drm_dp_pcon_is_frl_ready(&intel_dp->aux) == true, TIMEOUT_FRL_READY_MS);

	if (!is_active)
		return -ETIMEDOUT;

	ret = drm_dp_pcon_frl_configure_1(&intel_dp->aux, max_frl_bw,
					  DP_PCON_ENABLE_SEQUENTIAL_LINK);
	if (ret < 0)
		return ret;
	ret = drm_dp_pcon_frl_configure_2(&intel_dp->aux, max_frl_bw_mask,
					  DP_PCON_FRL_LINK_TRAIN_NORMAL);
	if (ret < 0)
		return ret;
	ret = drm_dp_pcon_frl_enable(&intel_dp->aux);
	if (ret < 0)
		return ret;
	/*
	 * Wait for FRL to be completed
	 * Check if the HDMI Link is up and active.
	 */
	wait_for(is_active =
		 intel_dp_pcon_is_frl_trained(intel_dp, max_frl_bw_mask, &frl_trained_mask),
		 TIMEOUT_HDMI_LINK_ACTIVE_MS);

	if (!is_active)
		return -ETIMEDOUT;

frl_trained:
	drm_dbg(&i915->drm, "FRL_TRAINED_MASK = %u\n", frl_trained_mask);
	intel_dp->frl.trained_rate_gbps = intel_dp_pcon_get_frl_mask(frl_trained_mask);
	intel_dp->frl.is_trained = true;
	drm_dbg(&i915->drm, "FRL trained with : %d Gbps\n", intel_dp->frl.trained_rate_gbps);

	return 0;
}

static bool intel_dp_is_hdmi_2_1_sink(struct intel_dp *intel_dp)
{
	if (drm_dp_is_branch(intel_dp->dpcd) &&
	    intel_dp_has_hdmi_sink(intel_dp) &&
	    intel_dp_hdmi_sink_max_frl(intel_dp) > 0)
		return true;

	return false;
}

static
int intel_dp_pcon_set_tmds_mode(struct intel_dp *intel_dp)
{
	int ret;
	u8 buf = 0;

	/* Set PCON source control mode */
	buf |= DP_PCON_ENABLE_SOURCE_CTL_MODE;

	ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_PCON_HDMI_LINK_CONFIG_1, buf);
	if (ret < 0)
		return ret;

	/* Set HDMI LINK ENABLE */
	buf |= DP_PCON_ENABLE_HDMI_LINK;
	ret = drm_dp_dpcd_writeb(&intel_dp->aux, DP_PCON_HDMI_LINK_CONFIG_1, buf);
	if (ret < 0)
		return ret;

	return 0;
}

void intel_dp_check_frl_training(struct intel_dp *intel_dp)
{
	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);

	/*
	 * Always go for FRL training if:
	 * -PCON supports SRC_CTL_MODE (VESA DP2.0-HDMI2.1 PCON Spec Draft-1 Sec-7)
	 * -sink is HDMI2.1
	 */
	if (!(intel_dp->downstream_ports[2] & DP_PCON_SOURCE_CTL_MODE) ||
	    !intel_dp_is_hdmi_2_1_sink(intel_dp) ||
	    intel_dp->frl.is_trained)
		return;

	if (intel_dp_pcon_start_frl_training(intel_dp) < 0) {
		int ret, mode;

		drm_dbg(&dev_priv->drm, "Couldn't set FRL mode, continuing with TMDS mode\n");
		ret = intel_dp_pcon_set_tmds_mode(intel_dp);
		mode = drm_dp_pcon_hdmi_link_mode(&intel_dp->aux, NULL);

		if (ret < 0 || mode != DP_PCON_HDMI_MODE_TMDS)
			drm_dbg(&dev_priv->drm, "Issue with PCON, cannot set TMDS mode\n");
	} else {
		drm_dbg(&dev_priv->drm, "FRL training Completed\n");
	}
}

static int
intel_dp_pcon_dsc_enc_slice_height(const struct intel_crtc_state *crtc_state)
{
	int vactive = crtc_state->hw.adjusted_mode.vdisplay;

	return intel_hdmi_dsc_get_slice_height(vactive);
}

static int
intel_dp_pcon_dsc_enc_slices(struct intel_dp *intel_dp,
			     const struct intel_crtc_state *crtc_state)
{
	struct intel_connector *intel_connector = intel_dp->attached_connector;
	struct drm_connector *connector = &intel_connector->base;
	int hdmi_throughput = connector->display_info.hdmi.dsc_cap.clk_per_slice;
	int hdmi_max_slices = connector->display_info.hdmi.dsc_cap.max_slices;
	int pcon_max_slices = drm_dp_pcon_dsc_max_slices(intel_dp->pcon_dsc_dpcd);
	int pcon_max_slice_width = drm_dp_pcon_dsc_max_slice_width(intel_dp->pcon_dsc_dpcd);

	return intel_hdmi_dsc_get_num_slices(crtc_state, pcon_max_slices,
					     pcon_max_slice_width,
					     hdmi_max_slices, hdmi_throughput);
}

static int
intel_dp_pcon_dsc_enc_bpp(struct intel_dp *intel_dp,
			  const struct intel_crtc_state *crtc_state,
			  int num_slices, int slice_width)
{
	struct intel_connector *intel_connector = intel_dp->attached_connector;
	struct drm_connector *connector = &intel_connector->base;
	int output_format = crtc_state->output_format;
	bool hdmi_all_bpp = connector->display_info.hdmi.dsc_cap.all_bpp;
	int pcon_fractional_bpp = drm_dp_pcon_dsc_bpp_incr(intel_dp->pcon_dsc_dpcd);
	int hdmi_max_chunk_bytes =
		connector->display_info.hdmi.dsc_cap.total_chunk_kbytes * 1024;

	return intel_hdmi_dsc_get_bpp(pcon_fractional_bpp, slice_width,
				      num_slices, output_format, hdmi_all_bpp,
				      hdmi_max_chunk_bytes);
}

void
intel_dp_pcon_dsc_configure(struct intel_dp *intel_dp,
			    const struct intel_crtc_state *crtc_state)
{
	u8 pps_param[6];
	int slice_height;
	int slice_width;
	int num_slices;
	int bits_per_pixel;
	int ret;
	struct intel_connector *intel_connector = intel_dp->attached_connector;
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct drm_connector *connector;
	bool hdmi_is_dsc_1_2;

	if (!intel_dp_is_hdmi_2_1_sink(intel_dp))
		return;

	if (!intel_connector)
		return;
	connector = &intel_connector->base;
	hdmi_is_dsc_1_2 = connector->display_info.hdmi.dsc_cap.v_1p2;

	if (!drm_dp_pcon_enc_is_dsc_1_2(intel_dp->pcon_dsc_dpcd) ||
	    !hdmi_is_dsc_1_2)
		return;

	slice_height = intel_dp_pcon_dsc_enc_slice_height(crtc_state);
	if (!slice_height)
		return;

	num_slices = intel_dp_pcon_dsc_enc_slices(intel_dp, crtc_state);
	if (!num_slices)
		return;

	slice_width = DIV_ROUND_UP(crtc_state->hw.adjusted_mode.hdisplay,
				   num_slices);

	bits_per_pixel = intel_dp_pcon_dsc_enc_bpp(intel_dp, crtc_state,
						   num_slices, slice_width);
	if (!bits_per_pixel)
		return;

	pps_param[0] = slice_height & 0xFF;
	pps_param[1] = slice_height >> 8;
	pps_param[2] = slice_width & 0xFF;
	pps_param[3] = slice_width >> 8;
	pps_param[4] = bits_per_pixel & 0xFF;
	pps_param[5] = (bits_per_pixel >> 8) & 0x3;

	ret = drm_dp_pcon_pps_override_param(&intel_dp->aux, pps_param);
	if (ret < 0)
		drm_dbg_kms(&i915->drm, "Failed to set pcon DSC\n");
}

void intel_dp_configure_protocol_converter(struct intel_dp *intel_dp,
					   const struct intel_crtc_state *crtc_state)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	bool ycbcr444_to_420 = false;
	bool rgb_to_ycbcr = false;
	u8 tmp;

	if (intel_dp->dpcd[DP_DPCD_REV] < 0x13)
		return;

	if (!drm_dp_is_branch(intel_dp->dpcd))
		return;

	tmp = intel_dp_has_hdmi_sink(intel_dp) ? DP_HDMI_DVI_OUTPUT_CONFIG : 0;

	if (drm_dp_dpcd_writeb(&intel_dp->aux,
			       DP_PROTOCOL_CONVERTER_CONTROL_0, tmp) != 1)
		drm_dbg_kms(&i915->drm, "Failed to %s protocol converter HDMI mode\n",
			    str_enable_disable(intel_dp_has_hdmi_sink(intel_dp)));

	if (crtc_state->sink_format == INTEL_OUTPUT_FORMAT_YCBCR420) {
		switch (crtc_state->output_format) {
		case INTEL_OUTPUT_FORMAT_YCBCR420:
			break;
		case INTEL_OUTPUT_FORMAT_YCBCR444:
			ycbcr444_to_420 = true;
			break;
		case INTEL_OUTPUT_FORMAT_RGB:
			rgb_to_ycbcr = true;
			ycbcr444_to_420 = true;
			break;
		default:
			MISSING_CASE(crtc_state->output_format);
			break;
		}
	} else if (crtc_state->sink_format == INTEL_OUTPUT_FORMAT_YCBCR444) {
		switch (crtc_state->output_format) {
		case INTEL_OUTPUT_FORMAT_YCBCR444:
			break;
		case INTEL_OUTPUT_FORMAT_RGB:
			rgb_to_ycbcr = true;
			break;
		default:
			MISSING_CASE(crtc_state->output_format);
			break;
		}
	}

	tmp = ycbcr444_to_420 ? DP_CONVERSION_TO_YCBCR420_ENABLE : 0;

	if (drm_dp_dpcd_writeb(&intel_dp->aux,
			       DP_PROTOCOL_CONVERTER_CONTROL_1, tmp) != 1)
		drm_dbg_kms(&i915->drm,
			    "Failed to %s protocol converter YCbCr 4:2:0 conversion mode\n",
			    str_enable_disable(intel_dp->dfp.ycbcr_444_to_420));

	tmp = rgb_to_ycbcr ? DP_CONVERSION_BT709_RGB_YCBCR_ENABLE : 0;

	if (drm_dp_pcon_convert_rgb_to_ycbcr(&intel_dp->aux, tmp) < 0)
		drm_dbg_kms(&i915->drm,
			    "Failed to %s protocol converter RGB->YCbCr conversion mode\n",
			    str_enable_disable(tmp));
}

bool intel_dp_get_colorimetry_status(struct intel_dp *intel_dp)
{
	u8 dprx = 0;

	if (drm_dp_dpcd_readb(&intel_dp->aux, DP_DPRX_FEATURE_ENUMERATION_LIST,
			      &dprx) != 1)
		return false;
	return dprx & DP_VSC_SDP_EXT_FOR_COLORIMETRY_SUPPORTED;
}

static void intel_dp_read_dsc_dpcd(struct drm_dp_aux *aux,
				   u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
{
	if (drm_dp_dpcd_read(aux, DP_DSC_SUPPORT, dsc_dpcd,
			     DP_DSC_RECEIVER_CAP_SIZE) < 0) {
		drm_err(aux->drm_dev,
			"Failed to read DPCD register 0x%x\n",
			DP_DSC_SUPPORT);
		return;
	}

	drm_dbg_kms(aux->drm_dev, "DSC DPCD: %*ph\n",
		    DP_DSC_RECEIVER_CAP_SIZE,
		    dsc_dpcd);
}

void intel_dp_get_dsc_sink_cap(u8 dpcd_rev, struct intel_connector *connector)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);

	/*
	 * Clear the cached register set to avoid using stale values
	 * for the sinks that do not support DSC.
	 */
	memset(connector->dp.dsc_dpcd, 0, sizeof(connector->dp.dsc_dpcd));

	/* Clear fec_capable to avoid using stale values */
	connector->dp.fec_capability = 0;

	if (dpcd_rev < DP_DPCD_REV_14)
		return;

	intel_dp_read_dsc_dpcd(connector->dp.dsc_decompression_aux,
			       connector->dp.dsc_dpcd);

	if (drm_dp_dpcd_readb(connector->dp.dsc_decompression_aux, DP_FEC_CAPABILITY,
			      &connector->dp.fec_capability) < 0) {
		drm_err(&i915->drm, "Failed to read FEC DPCD register\n");
		return;
	}

	drm_dbg_kms(&i915->drm, "FEC CAPABILITY: %x\n",
		    connector->dp.fec_capability);
}

static void intel_edp_get_dsc_sink_cap(u8 edp_dpcd_rev, struct intel_connector *connector)
{
	if (edp_dpcd_rev < DP_EDP_14)
		return;

	intel_dp_read_dsc_dpcd(connector->dp.dsc_decompression_aux, connector->dp.dsc_dpcd);
}

static void intel_edp_mso_mode_fixup(struct intel_connector *connector,
				     struct drm_display_mode *mode)
{
	struct intel_dp *intel_dp = intel_attached_dp(connector);
	struct drm_i915_private *i915 = to_i915(connector->base.dev);
	int n = intel_dp->mso_link_count;
	int overlap = intel_dp->mso_pixel_overlap;

	if (!mode || !n)
		return;

	mode->hdisplay = (mode->hdisplay - overlap) * n;
	mode->hsync_start = (mode->hsync_start - overlap) * n;
	mode->hsync_end = (mode->hsync_end - overlap) * n;
	mode->htotal = (mode->htotal - overlap) * n;
	mode->clock *= n;

	drm_mode_set_name(mode);

	drm_dbg_kms(&i915->drm,
		    "[CONNECTOR:%d:%s] using generated MSO mode: " DRM_MODE_FMT "\n",
		    connector->base.base.id, connector->base.name,
		    DRM_MODE_ARG(mode));
}

void intel_edp_fixup_vbt_bpp(struct intel_encoder *encoder, int pipe_bpp)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	struct intel_connector *connector = intel_dp->attached_connector;

	if (connector->panel.vbt.edp.bpp && pipe_bpp > connector->panel.vbt.edp.bpp) {
		/*
		 * This is a big fat ugly hack.
		 *
		 * Some machines in UEFI boot mode provide us a VBT that has 18
		 * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
		 * unknown we fail to light up. Yet the same BIOS boots up with
		 * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
		 * max, not what it tells us to use.
		 *
		 * Note: This will still be broken if the eDP panel is not lit
		 * up by the BIOS, and thus we can't get the mode at module
		 * load.
		 */
		drm_dbg_kms(&dev_priv->drm,
			    "pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
			    pipe_bpp, connector->panel.vbt.edp.bpp);
		connector->panel.vbt.edp.bpp = pipe_bpp;
	}
}

static void intel_edp_mso_init(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_connector *connector = intel_dp->attached_connector;
	struct drm_display_info *info = &connector->base.display_info;
	u8 mso;

	if (intel_dp->edp_dpcd[0] < DP_EDP_14)
		return;

	if (drm_dp_dpcd_readb(&intel_dp->aux, DP_EDP_MSO_LINK_CAPABILITIES, &mso) != 1) {
		drm_err(&i915->drm, "Failed to read MSO cap\n");
		return;
	}

	/* Valid configurations are SST or MSO 2x1, 2x2, 4x1 */
	mso &= DP_EDP_MSO_NUMBER_OF_LINKS_MASK;
	if (mso % 2 || mso > drm_dp_max_lane_count(intel_dp->dpcd)) {
		drm_err(&i915->drm, "Invalid MSO link count cap %u\n", mso);
		mso = 0;
	}

	if (mso) {
		drm_dbg_kms(&i915->drm, "Sink MSO %ux%u configuration, pixel overlap %u\n",
			    mso, drm_dp_max_lane_count(intel_dp->dpcd) / mso,
			    info->mso_pixel_overlap);
		if (!HAS_MSO(i915)) {
			drm_err(&i915->drm, "No source MSO support, disabling\n");
			mso = 0;
		}
	}

	intel_dp->mso_link_count = mso;
	intel_dp->mso_pixel_overlap = mso ? info->mso_pixel_overlap : 0;
}
|
|
|
|
|
|
2016-07-29 13:52:39 +00:00
|
|
|
|
static bool
|
2023-10-11 17:16:05 +00:00
|
|
|
|
intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector)
|
2016-07-29 13:52:39 +00:00
|
|
|
|
{
|
|
|
|
|
struct drm_i915_private *dev_priv =
|
|
|
|
|
to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
|
2016-03-30 12:35:25 +00:00
|
|
|
|
|
2016-07-29 13:52:39 +00:00
|
|
|
|
/* this function is meant to be called only once */
|
drm/i915/display/dp: Make WARN* drm specific where drm_device ptr is available
drm specific WARN* calls include device information in the
backtrace, so we know what device the warnings originate from.
Covert all the calls of WARN* with device specific drm_WARN*
variants in functions where drm_device or drm_i915_private struct
pointer is readily available.
The conversion was done automatically with below coccinelle semantic
patch. checkpatch errors/warnings are fixed manually.
@rule1@
identifier func, T;
@@
func(...) {
...
struct drm_device *T = ...;
<...
(
-WARN(
+drm_WARN(T,
...)
|
-WARN_ON(
+drm_WARN_ON(T,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(T,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(T,
...)
)
...>
}
@rule2@
identifier func, T;
@@
func(struct drm_device *T,...) {
<...
(
-WARN(
+drm_WARN(T,
...)
|
-WARN_ON(
+drm_WARN_ON(T,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(T,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(T,
...)
)
...>
}
@rule3@
identifier func, T;
@@
func(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule4@
identifier func, T;
@@
func(struct drm_i915_private *T,...) {
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200220165507.16823-6-pankaj.laxminarayan.bharadiya@intel.com
2020-02-20 16:55:04 +00:00
	drm_WARN_ON(&dev_priv->drm, intel_dp->dpcd[DP_DPCD_REV] != 0);

	if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd) != 0)
		return false;

	drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
			 drm_dp_is_branch(intel_dp->dpcd));

	/*
	 * Read the eDP display control registers.
	 *
	 * Do this independent of DP_DPCD_DISPLAY_CONTROL_CAPABLE bit in
	 * DP_EDP_CONFIGURATION_CAP, because some buggy displays do not have it
	 * set, but require eDP 1.4+ detection (e.g. for supported link rates
	 * method). The display control registers should read zero if they're
	 * not supported anyway.
	 */
	if (drm_dp_dpcd_read(&intel_dp->aux, DP_EDP_DPCD_REV,
			     intel_dp->edp_dpcd, sizeof(intel_dp->edp_dpcd)) ==
	    sizeof(intel_dp->edp_dpcd)) {
drm/i915/dp: conversion to struct drm_device logging macros.
This converts various instances of printk based logging macros in
i915/display/intel_dp.c with the new struct drm_device based logging
macros using the following coccinelle script:
@rule1@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@rule2@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
New checkpatch warnings were fixed manually.
v2: fix merge conflict with new changes in file.
Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200122110844.2022-5-wambui.karugax@gmail.com
2020-01-22 11:08:42 +00:00
		drm_dbg_kms(&dev_priv->drm, "eDP DPCD: %*ph\n",
			    (int)sizeof(intel_dp->edp_dpcd),
			    intel_dp->edp_dpcd);

		intel_dp->use_max_params = intel_dp->edp_dpcd[0] < DP_EDP_14;
	}

	/*
	 * This has to be called after intel_dp->edp_dpcd is filled, PSR checks
	 * for SET_POWER_CAPABLE bit in intel_dp->edp_dpcd[1]
	 */
	intel_psr_init_dpcd(intel_dp);
drm/i915/dp: Ensure sink rate values are always valid
Atm, there are no sink rate values set for DP (vs. eDP) sinks until the
DPCD capabilities are successfully read from the sink. During this time
intel_dp->num_common_rates is 0 which can lead to a
intel_dp->common_rates[-1] (*)
access, which is an undefined behaviour, in the following cases:
- In intel_dp_sync_state(), if the encoder is enabled without a sink
connected to the encoder's connector (BIOS enabled a monitor, but the
user unplugged the monitor until the driver loaded).
- In intel_dp_sync_state() if the encoder is enabled with a sink
connected, but for some reason the DPCD read has failed.
- In intel_dp_compute_link_config() if modesetting a connector without
a sink connected on it.
- In intel_dp_compute_link_config() if modesetting a connector with
  a sink connected on it, but before probing the connector first.
To avoid the (*) access in all the above cases, make sure that the sink
rate table - and hence the common rate table - is always valid, by
setting a default minimum sink rate when registering the connector
before anything could use it.
I also considered setting all the DP link rates by default, so that
modesetting with higher resolution modes also succeeds in the last two
cases above. However in case a sink is not connected that would stop
working after the first modeset, due to the LT fallback logic. So this
would need more work, beyond the scope of this fix.
As I mentioned in the previous patch, I don't think the issue this patch
fixes is user visible, however it is an undefined behaviour by
definition and triggers a BUG() in CONFIG_UBSAN builds, hence CC:stable.
v2: Clear the default sink rates, before initializing these for eDP.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4297
References: https://gitlab.freedesktop.org/drm/intel/-/issues/4298
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018143417.1452632-1-imre.deak@intel.com
2021-10-18 14:34:17 +00:00
	/* Clear the default sink rates */
	intel_dp->num_sink_rates = 0;

	/* Read the eDP 1.4+ supported link rates. */
	if (intel_dp->edp_dpcd[0] >= DP_EDP_14) {
		__le16 sink_rates[DP_MAX_SUPPORTED_RATES];
		int i;

		drm_dp_dpcd_read(&intel_dp->aux, DP_SUPPORTED_LINK_RATES,
				 sink_rates, sizeof(sink_rates));

		for (i = 0; i < ARRAY_SIZE(sink_rates); i++) {
			int val = le16_to_cpu(sink_rates[i]);

			if (val == 0)
				break;
drm/i915: Fix DP link rate math
We store DP link rates as link clock frequencies in kHz, just like all
other clock values. But, DP link rates in the DP Spec. are expressed in
Gbps/lane, which seems to have led to some confusion.
E.g., for HBR2
Max. data rate = 5.4 Gbps/lane x 4 lane x 8/10 x 1/8 = 2160000 kBps
where, 8/10 is for channel encoding and 1/8 is for bit to Byte conversion
Using link clock frequency, like we do
Max. data rate = 540000 kHz * 4 lanes = 2160000 kSymbols/s
Because, each symbol has 8 bit of data, this is 2160000 kBps
and there is no need to account for channel encoding here.
But, currently we do 540000 kHz * 4 lanes * (8/10) = 1728000 kBps
Similarly, while computing the required link bandwidth for a mode,
there is a mysterious 1/10 term.
This should simply be pixel_clock kHz * (bpp/8) to give the final result in
kBps
v2: Changed to DIV_ROUND_UP() and comment changes (Ville)
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1479160220-17794-1-git-send-email-dhinakaran.pandiyan@intel.com
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
2016-11-14 21:50:20 +00:00
			/* Value read multiplied by 200kHz gives the per-lane
			 * link rate in kHz. The source rates are, however,
			 * stored in terms of LS_Clk kHz. The full conversion
			 * back to symbols is
			 * (val * 200kHz)*(8/10 ch. encoding)*(1/8 bit to Byte)
			 */
			intel_dp->sink_rates[i] = (val * 200) / 10;
		}
		intel_dp->num_sink_rates = i;
	}
	/*
	 * Use DP_LINK_RATE_SET if DP_SUPPORTED_LINK_RATES are available,
	 * default to DP_MAX_LINK_RATE and DP_LINK_BW_SET otherwise.
	 */
	if (intel_dp->num_sink_rates)
		intel_dp->use_rate_select = true;
	else
		intel_dp_set_sink_rates(intel_dp);

	intel_dp_set_max_sink_lane_count(intel_dp);

	/* Read the eDP DSC DPCD registers */
	if (HAS_DSC(dev_priv))
		intel_edp_get_dsc_sink_cap(intel_dp->edp_dpcd[0],
					   connector);

	/*
	 * If needed, program our source OUI so we can make various Intel-specific AUX services
	 * available (such as HDR backlight controls)
	 */
	intel_edp_init_source_oui(intel_dp, true);

	return true;
}
static bool
intel_dp_has_sink_count(struct intel_dp *intel_dp)
{
	if (!intel_dp->attached_connector)
		return false;

	return drm_dp_read_sink_count_cap(&intel_dp->attached_connector->base,
					  intel_dp->dpcd,
					  &intel_dp->desc);
}
static bool
intel_dp_get_dpcd(struct intel_dp *intel_dp)
{
	int ret;

	if (intel_dp_init_lttpr_and_dprx_caps(intel_dp) < 0)
		return false;

	/*
	 * Don't clobber cached eDP rates. Also skip re-reading
	 * the OUI/ID since we know it won't change.
	 */
	if (!intel_dp_is_edp(intel_dp)) {
		drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc,
				 drm_dp_is_branch(intel_dp->dpcd));

		intel_dp_set_sink_rates(intel_dp);
		intel_dp_set_max_sink_lane_count(intel_dp);
		intel_dp_set_common_rates(intel_dp);
	}

	if (intel_dp_has_sink_count(intel_dp)) {
		ret = drm_dp_read_sink_count(&intel_dp->aux);
		if (ret < 0)
			return false;

		/*
		 * Sink count can change between short pulse hpd hence
		 * a member variable in intel_dp will track any changes
		 * between short pulse interrupts.
		 */
		intel_dp->sink_count = ret;

		/*
		 * SINK_COUNT == 0 and DOWNSTREAM_PORT_PRESENT == 1 implies that
		 * a dongle is present but no display. Unless we require to know
		 * if a dongle is present or not, we don't need to update
		 * downstream port information. So, an early return here saves
		 * time from performing other operations which are not required.
		 */
		if (!intel_dp->sink_count)
			return false;
	}

	return drm_dp_read_downstream_info(&intel_dp->aux, intel_dp->dpcd,
					   intel_dp->downstream_ports) == 0;
}
static bool
intel_dp_can_mst(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	return i915->display.params.enable_dp_mst &&
		intel_dp_mst_source_support(intel_dp) &&
		drm_dp_read_mst_cap(&intel_dp->aux, intel_dp->dpcd);
}

static void
intel_dp_configure_mst(struct intel_dp *intel_dp)
{
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_encoder *encoder =
		&dp_to_dig_port(intel_dp)->base;
	bool sink_can_mst = drm_dp_read_mst_cap(&intel_dp->aux, intel_dp->dpcd);
	drm_dbg_kms(&i915->drm,
		    "[ENCODER:%d:%s] MST support: port: %s, sink: %s, modparam: %s\n",
		    encoder->base.base.id, encoder->base.name,
		    str_yes_no(intel_dp_mst_source_support(intel_dp)),
		    str_yes_no(sink_can_mst),
		    str_yes_no(i915->display.params.enable_dp_mst));

	if (!intel_dp_mst_source_support(intel_dp))
		return;

	intel_dp->is_mst = sink_can_mst &&
		i915->display.params.enable_dp_mst;

	drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr,
					intel_dp->is_mst);
}
static bool
intel_dp_get_sink_irq_esi(struct intel_dp *intel_dp, u8 *esi)
{
	return drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 4) == 4;
}

static bool intel_dp_ack_sink_irq_esi(struct intel_dp *intel_dp, u8 esi[4])
{
	int retry;

	for (retry = 0; retry < 3; retry++) {
		if (drm_dp_dpcd_write(&intel_dp->aux, DP_SINK_COUNT_ESI + 1,
				      &esi[1], 3) == 3)
			return true;
	}

	return false;
}
bool
intel_dp_needs_vsc_sdp(const struct intel_crtc_state *crtc_state,
		       const struct drm_connector_state *conn_state)
{
	/*
	 * As per DP 1.4a spec section 2.2.4.3 [MSA Field for Indication
	 * of Color Encoding Format and Content Color Gamut], in order to
	 * send YCBCR 420 or HDR BT.2020 signals we should use DP VSC SDP.
	 */
	if (crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
		return true;

	switch (conn_state->colorspace) {
	case DRM_MODE_COLORIMETRY_SYCC_601:
	case DRM_MODE_COLORIMETRY_OPYCC_601:
	case DRM_MODE_COLORIMETRY_BT2020_YCC:
	case DRM_MODE_COLORIMETRY_BT2020_RGB:
	case DRM_MODE_COLORIMETRY_BT2020_CYCC:
		return true;
	default:
		break;
	}

	return false;
}
static ssize_t intel_dp_vsc_sdp_pack(const struct drm_dp_vsc_sdp *vsc,
				     struct dp_sdp *sdp, size_t size)
{
	size_t length = sizeof(struct dp_sdp);

	if (size < length)
		return -ENOSPC;

	memset(sdp, 0, size);

	/*
	 * Prepare VSC Header for SU as per DP 1.4a spec, Table 2-119
	 * VSC SDP Header Bytes
	 */
	sdp->sdp_header.HB0 = 0; /* Secondary-Data Packet ID = 0 */
	sdp->sdp_header.HB1 = vsc->sdp_type; /* Secondary-data Packet Type */
	sdp->sdp_header.HB2 = vsc->revision; /* Revision Number */
	sdp->sdp_header.HB3 = vsc->length; /* Number of Valid Data Bytes */

	if (vsc->revision == 0x6) {
		sdp->db[0] = 1;
		sdp->db[3] = 1;
	}

	/*
	 * Revision 0x5 and revision 0x7 support Pixel Encoding/Colorimetry
	 * Format as per DP 1.4a spec and DP 2.0 respectively.
	 */
	if (!(vsc->revision == 0x5 || vsc->revision == 0x7))
		goto out;

	/* VSC SDP Payload for DB16 through DB18 */
	/* Pixel Encoding and Colorimetry Formats */
	sdp->db[16] = (vsc->pixelformat & 0xf) << 4; /* DB16[7:4] */
	sdp->db[16] |= vsc->colorimetry & 0xf; /* DB16[3:0] */

	switch (vsc->bpc) {
	case 6:
		/* 6bpc: 0x0 */
		break;
	case 8:
		sdp->db[17] = 0x1; /* DB17[3:0] */
		break;
	case 10:
		sdp->db[17] = 0x2;
		break;
	case 12:
		sdp->db[17] = 0x3;
		break;
	case 16:
		sdp->db[17] = 0x4;
		break;
	default:
		MISSING_CASE(vsc->bpc);
		break;
	}

	/* Dynamic Range and Component Bit Depth */
	if (vsc->dynamic_range == DP_DYNAMIC_RANGE_CTA)
		sdp->db[17] |= 0x80; /* DB17[7] */

	/* Content Type */
	sdp->db[18] = vsc->content_type & 0x7;

out:
	return length;
}
static ssize_t
intel_dp_hdr_metadata_infoframe_sdp_pack(struct drm_i915_private *i915,
					 const struct hdmi_drm_infoframe *drm_infoframe,
					 struct dp_sdp *sdp,
					 size_t size)
{
	size_t length = sizeof(struct dp_sdp);
	const int infoframe_size = HDMI_INFOFRAME_HEADER_SIZE + HDMI_DRM_INFOFRAME_SIZE;
	unsigned char buf[HDMI_INFOFRAME_HEADER_SIZE + HDMI_DRM_INFOFRAME_SIZE];
	ssize_t len;

	if (size < length)
		return -ENOSPC;

	memset(sdp, 0, size);

	len = hdmi_drm_infoframe_pack_only(drm_infoframe, buf, sizeof(buf));
	if (len < 0) {
		drm_dbg_kms(&i915->drm, "buffer size is smaller than hdr metadata infoframe\n");
		return -ENOSPC;
	}

	if (len != infoframe_size) {
		drm_dbg_kms(&i915->drm, "wrong static hdr metadata size\n");
		return -ENOSPC;
	}

	/*
	 * Set up the infoframe sdp packet for HDR static metadata.
	 * Prepare VSC Header for SU as per DP 1.4a spec,
	 * Table 2-100 and Table 2-101
	 */

	/* Secondary-Data Packet ID, 00h for non-Audio INFOFRAME */
	sdp->sdp_header.HB0 = 0;
	/*
	 * Packet Type 80h + Non-audio INFOFRAME Type value
	 * HDMI_INFOFRAME_TYPE_DRM: 0x87
	 * - 80h + Non-audio INFOFRAME Type value
	 * - InfoFrame Type: 0x07
	 * [CTA-861-G Table-42 Dynamic Range and Mastering InfoFrame]
	 */
	sdp->sdp_header.HB1 = drm_infoframe->type;
	/*
	 * Least Significant Eight Bits of (Data Byte Count - 1)
	 * infoframe_size - 1
	 */
	sdp->sdp_header.HB2 = 0x1D;
	/* INFOFRAME SDP Version Number */
	sdp->sdp_header.HB3 = (0x13 << 2);
	/* CTA Header Byte 2 (INFOFRAME Version Number) */
	sdp->db[0] = drm_infoframe->version;
	/* CTA Header Byte 3 (Length of INFOFRAME): HDMI_DRM_INFOFRAME_SIZE */
	sdp->db[1] = drm_infoframe->length;
	/*
	 * Copy HDMI_DRM_INFOFRAME_SIZE size from a buffer after
	 * HDMI_INFOFRAME_HEADER_SIZE
	 */
	BUILD_BUG_ON(sizeof(sdp->db) < HDMI_DRM_INFOFRAME_SIZE + 2);
	memcpy(&sdp->db[2], &buf[HDMI_INFOFRAME_HEADER_SIZE],
	       HDMI_DRM_INFOFRAME_SIZE);

	/*
	 * Size of DP infoframe sdp packet for HDR static metadata consists of
	 * - DP SDP Header(struct dp_sdp_header): 4 bytes
	 * - Two Data Blocks: 2 bytes
	 *    CTA Header Byte2 (INFOFRAME Version Number)
	 *    CTA Header Byte3 (Length of INFOFRAME)
	 * - HDMI_DRM_INFOFRAME_SIZE: 26 bytes
	 *
	 * Prior to GEN11, the GMP register size is identical to the DP HDR
	 * static metadata infoframe size. GEN11+ has a larger register;
	 * write_infoframe will pad the rest of the size.
	 */
	return sizeof(struct dp_sdp_header) + 2 + HDMI_DRM_INFOFRAME_SIZE;
}
static void intel_write_dp_sdp(struct intel_encoder *encoder,
			       const struct intel_crtc_state *crtc_state,
			       unsigned int type)
{
	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct dp_sdp sdp = {};
	ssize_t len;

	if ((crtc_state->infoframes.enable &
	     intel_hdmi_infoframe_enable(type)) == 0)
		return;

	switch (type) {
	case DP_SDP_VSC:
		len = intel_dp_vsc_sdp_pack(&crtc_state->infoframes.vsc, &sdp,
					    sizeof(sdp));
		break;
	case HDMI_PACKET_TYPE_GAMUT_METADATA:
		len = intel_dp_hdr_metadata_infoframe_sdp_pack(dev_priv,
							       &crtc_state->infoframes.drm.drm,
							       &sdp, sizeof(sdp));
		break;
	default:
		MISSING_CASE(type);
		return;
	}

	if (drm_WARN_ON(&dev_priv->drm, len < 0))
		return;

	dig_port->write_infoframe(encoder, crtc_state, type, &sdp, len);
}
void intel_write_dp_vsc_sdp(struct intel_encoder *encoder,
			    const struct intel_crtc_state *crtc_state,
			    const struct drm_dp_vsc_sdp *vsc)
{
	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct dp_sdp sdp = {};
	ssize_t len;

	len = intel_dp_vsc_sdp_pack(vsc, &sdp, sizeof(sdp));

	if (drm_WARN_ON(&dev_priv->drm, len < 0))
		return;

	dig_port->write_infoframe(encoder, crtc_state, DP_SDP_VSC,
				  &sdp, len);
}
void intel_dp_set_infoframes(struct intel_encoder *encoder,
			     bool enable,
			     const struct intel_crtc_state *crtc_state,
			     const struct drm_connector_state *conn_state)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	i915_reg_t reg = HSW_TVIDEO_DIP_CTL(crtc_state->cpu_transcoder);
	u32 dip_enable = VIDEO_DIP_ENABLE_AVI_HSW | VIDEO_DIP_ENABLE_GCP_HSW |
		VIDEO_DIP_ENABLE_VS_HSW | VIDEO_DIP_ENABLE_GMP_HSW |
		VIDEO_DIP_ENABLE_SPD_HSW | VIDEO_DIP_ENABLE_DRM_GLK;
	u32 val = intel_de_read(dev_priv, reg) & ~dip_enable;

	/* TODO: Sanitize DSC enabling wrt. intel_dsc_dp_pps_write(). */
	if (!enable && HAS_DSC(dev_priv))
		val &= ~VDIP_ENABLE_PPS;

	/* When PSR is enabled, this routine doesn't disable VSC DIP */
	if (!crtc_state->has_psr)
		val &= ~VIDEO_DIP_ENABLE_VSC_HSW;

	intel_de_write(dev_priv, reg, val);
	intel_de_posting_read(dev_priv, reg);

	if (!enable)
		return;

	/* When PSR is enabled, VSC SDP is handled by PSR routine */
	if (!crtc_state->has_psr)
		intel_write_dp_sdp(encoder, crtc_state, DP_SDP_VSC);

	intel_write_dp_sdp(encoder, crtc_state, HDMI_PACKET_TYPE_GAMUT_METADATA);
}
static int intel_dp_vsc_sdp_unpack(struct drm_dp_vsc_sdp *vsc,
				   const void *buffer, size_t size)
{
	const struct dp_sdp *sdp = buffer;

	if (size < sizeof(struct dp_sdp))
		return -EINVAL;

	memset(vsc, 0, sizeof(*vsc));

	if (sdp->sdp_header.HB0 != 0)
		return -EINVAL;

	if (sdp->sdp_header.HB1 != DP_SDP_VSC)
		return -EINVAL;

	vsc->sdp_type = sdp->sdp_header.HB1;
	vsc->revision = sdp->sdp_header.HB2;
	vsc->length = sdp->sdp_header.HB3;

	if ((sdp->sdp_header.HB2 == 0x2 && sdp->sdp_header.HB3 == 0x8) ||
	    (sdp->sdp_header.HB2 == 0x4 && sdp->sdp_header.HB3 == 0xe)) {
		/*
		 * - HB2 = 0x2, HB3 = 0x8
		 *   VSC SDP supporting 3D stereo + PSR
		 * - HB2 = 0x4, HB3 = 0xe
		 *   VSC SDP supporting 3D stereo + PSR2 with Y-coordinate of
		 *   first scan line of the SU region (applies to eDP v1.4b
		 *   and higher).
		 */
		return 0;
	} else if (sdp->sdp_header.HB2 == 0x5 && sdp->sdp_header.HB3 == 0x13) {
		/*
		 * - HB2 = 0x5, HB3 = 0x13
		 *   VSC SDP supporting 3D stereo + PSR2 + Pixel Encoding/Colorimetry
		 *   Format.
		 */
		vsc->pixelformat = (sdp->db[16] >> 4) & 0xf;
		vsc->colorimetry = sdp->db[16] & 0xf;
		vsc->dynamic_range = (sdp->db[17] >> 7) & 0x1;

		switch (sdp->db[17] & 0x7) {
		case 0x0:
			vsc->bpc = 6;
			break;
		case 0x1:
			vsc->bpc = 8;
			break;
		case 0x2:
			vsc->bpc = 10;
			break;
		case 0x3:
			vsc->bpc = 12;
			break;
		case 0x4:
			vsc->bpc = 16;
			break;
		default:
			MISSING_CASE(sdp->db[17] & 0x7);
			return -EINVAL;
		}

		vsc->content_type = sdp->db[18] & 0x7;
	} else {
		return -EINVAL;
	}

	return 0;
}
static int
intel_dp_hdr_metadata_infoframe_sdp_unpack(struct hdmi_drm_infoframe *drm_infoframe,
					   const void *buffer, size_t size)
{
	int ret;

	const struct dp_sdp *sdp = buffer;

	if (size < sizeof(struct dp_sdp))
		return -EINVAL;

	if (sdp->sdp_header.HB0 != 0)
		return -EINVAL;

	if (sdp->sdp_header.HB1 != HDMI_INFOFRAME_TYPE_DRM)
		return -EINVAL;

	/*
	 * Least Significant Eight Bits of (Data Byte Count - 1)
	 * 1Dh (i.e., Data Byte Count = 30 bytes).
	 */
	if (sdp->sdp_header.HB2 != 0x1D)
		return -EINVAL;

	/* Most Significant Two Bits of (Data Byte Count - 1), Clear to 00b. */
	if ((sdp->sdp_header.HB3 & 0x3) != 0)
		return -EINVAL;

	/* INFOFRAME SDP Version Number */
	if (((sdp->sdp_header.HB3 >> 2) & 0x3f) != 0x13)
		return -EINVAL;

	/* CTA Header Byte 2 (INFOFRAME Version Number) */
	if (sdp->db[0] != 1)
		return -EINVAL;

	/* CTA Header Byte 3 (Length of INFOFRAME): HDMI_DRM_INFOFRAME_SIZE */
	if (sdp->db[1] != HDMI_DRM_INFOFRAME_SIZE)
		return -EINVAL;

	ret = hdmi_drm_infoframe_unpack_only(drm_infoframe, &sdp->db[2],
					     HDMI_DRM_INFOFRAME_SIZE);

	return ret;
}
static void intel_read_dp_vsc_sdp(struct intel_encoder *encoder,
				  struct intel_crtc_state *crtc_state,
				  struct drm_dp_vsc_sdp *vsc)
{
	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	unsigned int type = DP_SDP_VSC;
	struct dp_sdp sdp = {};
	int ret;

	/* When PSR is enabled, VSC SDP is handled by PSR routine */
	if (crtc_state->has_psr)
		return;

	if ((crtc_state->infoframes.enable &
	     intel_hdmi_infoframe_enable(type)) == 0)
		return;

	dig_port->read_infoframe(encoder, crtc_state, type, &sdp, sizeof(sdp));

	ret = intel_dp_vsc_sdp_unpack(vsc, &sdp, sizeof(sdp));

	if (ret)
		drm_dbg_kms(&dev_priv->drm, "Failed to unpack DP VSC SDP\n");
}
static void intel_read_dp_hdr_metadata_infoframe_sdp(struct intel_encoder *encoder,
						     struct intel_crtc_state *crtc_state,
						     struct hdmi_drm_infoframe *drm_infoframe)
{
	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	unsigned int type = HDMI_PACKET_TYPE_GAMUT_METADATA;
	struct dp_sdp sdp = {};
	int ret;

	if ((crtc_state->infoframes.enable &
	     intel_hdmi_infoframe_enable(type)) == 0)
		return;

	dig_port->read_infoframe(encoder, crtc_state, type, &sdp,
				 sizeof(sdp));

	ret = intel_dp_hdr_metadata_infoframe_sdp_unpack(drm_infoframe, &sdp,
							 sizeof(sdp));

	if (ret)
		drm_dbg_kms(&dev_priv->drm,
			    "Failed to unpack DP HDR Metadata Infoframe SDP\n");
}

void intel_read_dp_sdp(struct intel_encoder *encoder,
		       struct intel_crtc_state *crtc_state,
		       unsigned int type)
{
	switch (type) {
	case DP_SDP_VSC:
		intel_read_dp_vsc_sdp(encoder, crtc_state,
				      &crtc_state->infoframes.vsc);
		break;
	case HDMI_PACKET_TYPE_GAMUT_METADATA:
		intel_read_dp_hdr_metadata_infoframe_sdp(encoder, crtc_state,
							 &crtc_state->infoframes.drm.drm);
		break;
	default:
		MISSING_CASE(type);
		break;
	}
}

static u8 intel_dp_autotest_link_training(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	int status = 0;
	int test_link_rate;
	u8 test_lane_count, test_link_bw;
	/* (DP CTS 1.2)
	 * 4.3.1.11
	 */
	/* Read the TEST_LANE_COUNT and TEST_LINK_RATE fields (DP CTS 3.1.4) */
	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_LANE_COUNT,
				   &test_lane_count);

	if (status <= 0) {
		drm_dbg_kms(&i915->drm, "Lane count read failed\n");
		return DP_TEST_NAK;
	}
	test_lane_count &= DP_MAX_LANE_COUNT_MASK;

	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_LINK_RATE,
				   &test_link_bw);
	if (status <= 0) {
		drm_dbg_kms(&i915->drm, "Link Rate read failed\n");
		return DP_TEST_NAK;
	}
	test_link_rate = drm_dp_bw_code_to_link_rate(test_link_bw);

	/* Validate the requested link rate and lane count */
	if (!intel_dp_link_params_valid(intel_dp, test_link_rate,
					test_lane_count))
		return DP_TEST_NAK;

	intel_dp->compliance.test_lane_count = test_lane_count;
	intel_dp->compliance.test_link_rate = test_link_rate;

	return DP_TEST_ACK;
}

static u8 intel_dp_autotest_video_pattern(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	u8 test_pattern;
	u8 test_misc;
	__be16 h_width, v_height;
	int status = 0;

	/* Read the TEST_PATTERN (DP CTS 3.1.5) */
	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_PATTERN,
				   &test_pattern);
	if (status <= 0) {
		drm_dbg_kms(&i915->drm, "Test pattern read failed\n");
		return DP_TEST_NAK;
	}
	if (test_pattern != DP_COLOR_RAMP)
		return DP_TEST_NAK;

	status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_H_WIDTH_HI,
				  &h_width, 2);
	if (status <= 0) {
		drm_dbg_kms(&i915->drm, "H Width read failed\n");
		return DP_TEST_NAK;
	}

	status = drm_dp_dpcd_read(&intel_dp->aux, DP_TEST_V_HEIGHT_HI,
				  &v_height, 2);
	if (status <= 0) {
		drm_dbg_kms(&i915->drm, "V Height read failed\n");
		return DP_TEST_NAK;
	}

	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_MISC0,
				   &test_misc);
	if (status <= 0) {
		drm_dbg_kms(&i915->drm, "TEST MISC read failed\n");
		return DP_TEST_NAK;
	}
	if ((test_misc & DP_TEST_COLOR_FORMAT_MASK) != DP_COLOR_FORMAT_RGB)
		return DP_TEST_NAK;
	if (test_misc & DP_TEST_DYNAMIC_RANGE_CEA)
		return DP_TEST_NAK;
	switch (test_misc & DP_TEST_BIT_DEPTH_MASK) {
	case DP_TEST_BIT_DEPTH_6:
		intel_dp->compliance.test_data.bpc = 6;
		break;
	case DP_TEST_BIT_DEPTH_8:
		intel_dp->compliance.test_data.bpc = 8;
		break;
	default:
		return DP_TEST_NAK;
	}

	intel_dp->compliance.test_data.video_pattern = test_pattern;
	intel_dp->compliance.test_data.hdisplay = be16_to_cpu(h_width);
	intel_dp->compliance.test_data.vdisplay = be16_to_cpu(v_height);
	/* Set test active flag here so userspace doesn't interrupt things */
	intel_dp->compliance.test_active = true;

	return DP_TEST_ACK;
}

static u8 intel_dp_autotest_edid(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	u8 test_result = DP_TEST_ACK;
	struct intel_connector *intel_connector = intel_dp->attached_connector;
	struct drm_connector *connector = &intel_connector->base;

	if (intel_connector->detect_edid == NULL ||
	    connector->edid_corrupt ||
	    intel_dp->aux.i2c_defer_count > 6) {
		/* Check EDID read for NACKs, DEFERs and corruption
		 * (DP CTS 1.2 Core r1.1)
		 *    4.2.2.4 : Failed EDID read, I2C_NAK
		 *    4.2.2.5 : Failed EDID read, I2C_DEFER
		 *    4.2.2.6 : EDID corruption detected
		 * Use failsafe mode for all cases
		 */
		if (intel_dp->aux.i2c_nack_count > 0 ||
		    intel_dp->aux.i2c_defer_count > 0)
			drm_dbg_kms(&i915->drm,
				    "EDID read had %d NACKs, %d DEFERs\n",
				    intel_dp->aux.i2c_nack_count,
				    intel_dp->aux.i2c_defer_count);
		intel_dp->compliance.test_data.edid = INTEL_DP_RESOLUTION_FAILSAFE;
	} else {
		/* FIXME: Get rid of drm_edid_raw() */
		const struct edid *block = drm_edid_raw(intel_connector->detect_edid);

		/* We have to write the checksum of the last block read */
		block += block->extensions;

		if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_EDID_CHECKSUM,
				       block->checksum) <= 0)
			drm_dbg_kms(&i915->drm,
				    "Failed to write EDID checksum\n");

		test_result = DP_TEST_ACK | DP_TEST_EDID_CHECKSUM_WRITE;
		intel_dp->compliance.test_data.edid = INTEL_DP_RESOLUTION_PREFERRED;
	}

	/* Set test active flag here so userspace doesn't interrupt things */
	intel_dp->compliance.test_active = true;

	return test_result;
}

static void intel_dp_phy_pattern_update(struct intel_dp *intel_dp,
					const struct intel_crtc_state *crtc_state)
{
	struct drm_i915_private *dev_priv =
			to_i915(dp_to_dig_port(intel_dp)->base.base.dev);
	struct drm_dp_phy_test_params *data =
			&intel_dp->compliance.test_data.phytest;
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
	enum pipe pipe = crtc->pipe;
	u32 pattern_val;

	switch (data->phy_pattern) {
	case DP_PHY_TEST_PATTERN_NONE:
		drm_dbg_kms(&dev_priv->drm, "Disable Phy Test Pattern\n");
		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe), 0x0);
		break;
	case DP_PHY_TEST_PATTERN_D10_2:
		drm_dbg_kms(&dev_priv->drm, "Set D10.2 Phy Test Pattern\n");
		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
			       DDI_DP_COMP_CTL_ENABLE | DDI_DP_COMP_CTL_D10_2);
		break;
	case DP_PHY_TEST_PATTERN_ERROR_COUNT:
		drm_dbg_kms(&dev_priv->drm, "Set Error Count Phy Test Pattern\n");
		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
			       DDI_DP_COMP_CTL_ENABLE |
			       DDI_DP_COMP_CTL_SCRAMBLED_0);
		break;
	case DP_PHY_TEST_PATTERN_PRBS7:
		drm_dbg_kms(&dev_priv->drm, "Set PRBS7 Phy Test Pattern\n");
		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
			       DDI_DP_COMP_CTL_ENABLE | DDI_DP_COMP_CTL_PRBS7);
		break;
	case DP_PHY_TEST_PATTERN_80BIT_CUSTOM:
		/*
		 * FIXME: Ideally pattern should come from DPCD 0x250. As
		 * current firmware of DPR-100 could not set it, so hardcoding
		 * now for compliance test.
		 */
		drm_dbg_kms(&dev_priv->drm,
			    "Set 80Bit Custom Phy Test Pattern 0x3e0f83e0 0x0f83e0f8 0x0000f83e\n");
		pattern_val = 0x3e0f83e0;
		intel_de_write(dev_priv, DDI_DP_COMP_PAT(pipe, 0), pattern_val);
		pattern_val = 0x0f83e0f8;
		intel_de_write(dev_priv, DDI_DP_COMP_PAT(pipe, 1), pattern_val);
		pattern_val = 0x0000f83e;
		intel_de_write(dev_priv, DDI_DP_COMP_PAT(pipe, 2), pattern_val);
		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
			       DDI_DP_COMP_CTL_ENABLE |
			       DDI_DP_COMP_CTL_CUSTOM80);
		break;
	case DP_PHY_TEST_PATTERN_CP2520:
		/*
		 * FIXME: Ideally pattern should come from DPCD 0x24A. As
		 * current firmware of DPR-100 could not set it, so hardcoding
		 * now for compliance test.
		 */
		drm_dbg_kms(&dev_priv->drm, "Set HBR2 compliance Phy Test Pattern\n");
		pattern_val = 0xFB;
		intel_de_write(dev_priv, DDI_DP_COMP_CTL(pipe),
			       DDI_DP_COMP_CTL_ENABLE | DDI_DP_COMP_CTL_HBR2 |
			       pattern_val);
		break;
	default:
		WARN(1, "Invalid Phy Test Pattern\n");
	}
}

static void intel_dp_process_phy_request(struct intel_dp *intel_dp,
					 const struct intel_crtc_state *crtc_state)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct drm_dp_phy_test_params *data =
		&intel_dp->compliance.test_data.phytest;
	u8 link_status[DP_LINK_STATUS_SIZE];

	if (drm_dp_dpcd_read_phy_link_status(&intel_dp->aux, DP_PHY_DPRX,
					     link_status) < 0) {
		drm_dbg_kms(&i915->drm, "failed to get link status\n");
		return;
	}

	/* retrieve vswing & pre-emphasis setting */
	intel_dp_get_adjust_train(intel_dp, crtc_state, DP_PHY_DPRX,
				  link_status);

	intel_dp_set_signal_levels(intel_dp, crtc_state, DP_PHY_DPRX);

	intel_dp_phy_pattern_update(intel_dp, crtc_state);

	drm_dp_dpcd_write(&intel_dp->aux, DP_TRAINING_LANE0_SET,
			  intel_dp->train_set, crtc_state->lane_count);

	drm_dp_set_phy_test_pattern(&intel_dp->aux, data,
				    intel_dp->dpcd[DP_DPCD_REV]);
}

static u8 intel_dp_autotest_phy_pattern(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct drm_dp_phy_test_params *data =
		&intel_dp->compliance.test_data.phytest;

	if (drm_dp_get_phy_test_pattern(&intel_dp->aux, data)) {
		drm_dbg_kms(&i915->drm, "DP Phy Test pattern AUX read failure\n");
		return DP_TEST_NAK;
	}

	/* Set test active flag here so userspace doesn't interrupt things */
	intel_dp->compliance.test_active = true;

	return DP_TEST_ACK;
}

static void intel_dp_handle_test_request(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	u8 response = DP_TEST_NAK;
	u8 request = 0;
	int status;

	status = drm_dp_dpcd_readb(&intel_dp->aux, DP_TEST_REQUEST, &request);
	if (status <= 0) {
		drm_dbg_kms(&i915->drm,
			    "Could not read test request from sink\n");
		goto update_status;
	}

	switch (request) {
	case DP_TEST_LINK_TRAINING:
		drm_dbg_kms(&i915->drm, "LINK_TRAINING test requested\n");
		response = intel_dp_autotest_link_training(intel_dp);
		break;
	case DP_TEST_LINK_VIDEO_PATTERN:
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm, "TEST_PATTERN test requested\n");
|
2015-04-15 15:38:38 +00:00
|
|
|
|
response = intel_dp_autotest_video_pattern(intel_dp);
|
|
|
|
|
break;
|
|
|
|
|
case DP_TEST_LINK_EDID_READ:
|
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm, "EDID test requested\n");
|
2015-04-15 15:38:38 +00:00
|
|
|
|
response = intel_dp_autotest_edid(intel_dp);
|
|
|
|
|
break;
|
|
|
|
|
case DP_TEST_LINK_PHY_TEST_PATTERN:
|
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm, "PHY_PATTERN test requested\n");
|
2015-04-15 15:38:38 +00:00
|
|
|
|
response = intel_dp_autotest_phy_pattern(intel_dp);
|
|
|
|
|
break;
|
|
|
|
|
default:
|
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm, "Invalid test request '%02x'\n",
|
|
|
|
|
request);
|
2015-04-15 15:38:38 +00:00
|
|
|
|
break;
|
|
|
|
|
}
|
|
|
|
|
|
2017-01-20 17:04:06 +00:00
|
|
|
|
if (response & DP_TEST_ACK)
|
|
|
|
|
intel_dp->compliance.test_type = request;
|
|
|
|
|
|
2015-04-15 15:38:38 +00:00
|
|
|
|
update_status:
|
2017-01-20 17:04:06 +00:00
|
|
|
|
status = drm_dp_dpcd_writeb(&intel_dp->aux, DP_TEST_RESPONSE, response);
|
2015-04-15 15:38:38 +00:00
|
|
|
|
if (status <= 0)
|
drm/i915/dp: use struct drm_device based logging
Convert all the DRM_* logging macros to the struct drm_device based
macros to provide device specific logging.
No functional changes.
Generated using the following semantic patch, originally written by
Wambui Karuga <wambui.karugax@gmail.com>, with manual fixups on top:
@@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_NOTE(
+drm_notice(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
Cc: Wambui Karuga <wambui.karugax@gmail.com>
Reviewed-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200402114819.17232-4-jani.nikula@intel.com
2020-04-02 11:48:06 +00:00
|
|
|
|
drm_dbg_kms(&i915->drm,
|
|
|
|
|
"Could not write test response to sink\n");
|
2011-10-20 22:09:17 +00:00
|
|
|
|
}
|
|
|
|
|
|
2022-02-03 09:03:55 +00:00
|
|
|
|
static bool intel_dp_link_ok(struct intel_dp *intel_dp,
			     u8 link_status[DP_LINK_STATUS_SIZE])
{
	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	bool uhbr = intel_dp->link_rate >= 1000000;
	bool ok;

	if (uhbr)
		ok = drm_dp_128b132b_lane_channel_eq_done(link_status,
							  intel_dp->lane_count);
	else
		ok = drm_dp_channel_eq_ok(link_status, intel_dp->lane_count);

	if (ok)
		return true;

	intel_dp_dump_link_status(intel_dp, DP_PHY_DPRX, link_status);
	drm_dbg_kms(&i915->drm,
		    "[ENCODER:%d:%s] %s link not ok, retraining\n",
		    encoder->base.base.id, encoder->base.name,
		    uhbr ? "128b/132b" : "8b/10b");

	return false;
}

static void
intel_dp_mst_hpd_irq(struct intel_dp *intel_dp, u8 *esi, u8 *ack)
{
	bool handled = false;

	drm_dp_mst_hpd_irq_handle_event(&intel_dp->mst_mgr, esi, ack, &handled);

	if (esi[1] & DP_CP_IRQ) {
		intel_hdcp_handle_cp_irq(intel_dp->attached_connector);
		ack[1] |= DP_CP_IRQ;
	}
}

static bool intel_dp_mst_link_status(struct intel_dp *intel_dp)
{
	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	u8 link_status[DP_LINK_STATUS_SIZE] = {};
	const size_t esi_link_status_size = DP_LINK_STATUS_SIZE - 2;

	if (drm_dp_dpcd_read(&intel_dp->aux, DP_LANE0_1_STATUS_ESI, link_status,
			     esi_link_status_size) != esi_link_status_size) {
		drm_err(&i915->drm,
			"[ENCODER:%d:%s] Failed to read link status\n",
			encoder->base.base.id, encoder->base.name);
		return false;
	}

	return intel_dp_link_ok(intel_dp, link_status);
}

/**
 * intel_dp_check_mst_status - service any pending MST interrupts, check link status
 * @intel_dp: Intel DP struct
 *
 * Read any pending MST interrupts, call MST core to handle these and ack the
 * interrupts. Check if the main and AUX link state is ok.
 *
 * Returns:
 * - %true if pending interrupts were serviced (or no interrupts were
 *   pending) w/o detecting an error condition.
 * - %false if an error condition - like AUX failure or a loss of link - is
 *   detected, which needs servicing from the hotplug work.
 */
static bool
intel_dp_check_mst_status(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	bool link_ok = true;

	drm_WARN_ON_ONCE(&i915->drm, intel_dp->active_mst_links < 0);

	for (;;) {
		u8 esi[4] = {};
		u8 ack[4] = {};

		if (!intel_dp_get_sink_irq_esi(intel_dp, esi)) {
			drm_dbg_kms(&i915->drm,
				    "failed to get ESI - device may have failed\n");
			link_ok = false;

			break;
		}

		drm_dbg_kms(&i915->drm, "DPRX ESI: %4ph\n", esi);

		if (intel_dp->active_mst_links > 0 && link_ok &&
		    esi[3] & LINK_STATUS_CHANGED) {
			if (!intel_dp_mst_link_status(intel_dp))
				link_ok = false;
			ack[3] |= LINK_STATUS_CHANGED;
		}

		intel_dp_mst_hpd_irq(intel_dp, esi, ack);

		if (!memchr_inv(ack, 0, sizeof(ack)))
			break;

		if (!intel_dp_ack_sink_irq_esi(intel_dp, ack))
			drm_dbg_kms(&i915->drm, "Failed to ack ESI\n");

		if (ack[1] & (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY))
			drm_dp_mst_hpd_irq_send_new_request(&intel_dp->mst_mgr);
	}

	return link_ok;
}

static void
intel_dp_handle_hdmi_link_status_change(struct intel_dp *intel_dp)
{
	bool is_active;
	u8 buf = 0;

	is_active = drm_dp_pcon_hdmi_link_active(&intel_dp->aux);
	if (intel_dp->frl.is_trained && !is_active) {
		if (drm_dp_dpcd_readb(&intel_dp->aux, DP_PCON_HDMI_LINK_CONFIG_1, &buf) < 0)
			return;

		buf &= ~DP_PCON_ENABLE_HDMI_LINK;
		if (drm_dp_dpcd_writeb(&intel_dp->aux, DP_PCON_HDMI_LINK_CONFIG_1, buf) < 0)
			return;

		drm_dp_pcon_hdmi_frl_link_error_count(&intel_dp->aux, &intel_dp->attached_connector->base);

		intel_dp->frl.is_trained = false;

		/* Restart FRL training or fall back to TMDS mode */
		intel_dp_check_frl_training(intel_dp);
	}
}

static bool
intel_dp_needs_link_retrain(struct intel_dp *intel_dp)
{
	u8 link_status[DP_LINK_STATUS_SIZE];

	if (!intel_dp->link_trained)
		return false;

	/*
	 * While PSR source HW is enabled, it will control main-link sending
	 * frames, enabling and disabling it, so trying to do a retrain will
	 * fail as the link may or may not be on, or it could mix training
	 * patterns and frame data at the same time, causing the retrain to
	 * fail.
	 * Also, when exiting PSR, HW will retrain the link anyway, fixing
	 * any link status error.
	 */
	if (intel_psr_enabled(intel_dp))
		return false;

	if (drm_dp_dpcd_read_phy_link_status(&intel_dp->aux, DP_PHY_DPRX,
					     link_status) < 0)
		return false;

	/*
	 * Validate the cached values of intel_dp->link_rate and
	 * intel_dp->lane_count before attempting to retrain.
	 *
	 * FIXME would be nice to use the crtc state here, but since
	 * we need to call this from the short HPD handler that seems
	 * a bit hard.
	 */
	if (!intel_dp_link_params_valid(intel_dp, intel_dp->link_rate,
					intel_dp->lane_count))
		return false;

	/* Retrain if link not ok */
	return !intel_dp_link_ok(intel_dp, link_status);
}

static bool intel_dp_has_connector(struct intel_dp *intel_dp,
				   const struct drm_connector_state *conn_state)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_encoder *encoder;
	enum pipe pipe;

	if (!conn_state->best_encoder)
		return false;

	/* SST */
	encoder = &dp_to_dig_port(intel_dp)->base;
	if (conn_state->best_encoder == &encoder->base)
		return true;

	/* MST */
	for_each_pipe(i915, pipe) {
		encoder = &intel_dp->mst_encoders[pipe]->base;
		if (conn_state->best_encoder == &encoder->base)
			return true;
	}

	return false;
}

int intel_dp_get_active_pipes(struct intel_dp *intel_dp,
			      struct drm_modeset_acquire_ctx *ctx,
			      u8 *pipe_mask)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct drm_connector_list_iter conn_iter;
	struct intel_connector *connector;
	int ret = 0;

	*pipe_mask = 0;

	drm_connector_list_iter_begin(&i915->drm, &conn_iter);
	for_each_intel_connector_iter(connector, &conn_iter) {
		struct drm_connector_state *conn_state =
			connector->base.state;
		struct intel_crtc_state *crtc_state;
		struct intel_crtc *crtc;

		if (!intel_dp_has_connector(intel_dp, conn_state))
			continue;

		crtc = to_intel_crtc(conn_state->crtc);
		if (!crtc)
			continue;

		ret = drm_modeset_lock(&crtc->base.mutex, ctx);
		if (ret)
			break;

		crtc_state = to_intel_crtc_state(crtc->base.state);

		drm_WARN_ON(&i915->drm, !intel_crtc_has_dp_encoder(crtc_state));

		if (!crtc_state->hw.active)
			continue;

		if (conn_state->commit &&
		    !try_wait_for_completion(&conn_state->commit->hw_done))
			continue;

		*pipe_mask |= BIT(crtc->pipe);
	}
	drm_connector_list_iter_end(&conn_iter);

	return ret;
}

static bool intel_dp_is_connected(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;

	return connector->base.status == connector_status_connected ||
	       intel_dp->is_mst;
}

int intel_dp_retrain_link(struct intel_encoder *encoder,
			  struct drm_modeset_acquire_ctx *ctx)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	struct intel_crtc *crtc;
	u8 pipe_mask;
	int ret;

	if (!intel_dp_is_connected(intel_dp))
		return 0;

	ret = drm_modeset_lock(&dev_priv->drm.mode_config.connection_mutex,
			       ctx);
	if (ret)
		return ret;

	if (!intel_dp_needs_link_retrain(intel_dp))
		return 0;

	ret = intel_dp_get_active_pipes(intel_dp, ctx, &pipe_mask);
	if (ret)
		return ret;

	if (pipe_mask == 0)
		return 0;

	if (!intel_dp_needs_link_retrain(intel_dp))
		return 0;

	drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] retraining link\n",
		    encoder->base.base.id, encoder->base.name);

	for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
		const struct intel_crtc_state *crtc_state =
			to_intel_crtc_state(crtc->base.state);

		/* Suppress underruns caused by re-training */
		intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, false);
		if (crtc_state->has_pch_encoder)
			intel_set_pch_fifo_underrun_reporting(dev_priv,
							      intel_crtc_pch_transcoder(crtc), false);
	}

	for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
		const struct intel_crtc_state *crtc_state =
			to_intel_crtc_state(crtc->base.state);

		/* retrain on the MST master transcoder */
		if (DISPLAY_VER(dev_priv) >= 12 &&
		    intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
		    !intel_dp_mst_is_master_trans(crtc_state))
			continue;

		intel_dp_check_frl_training(intel_dp);
		intel_dp_pcon_dsc_configure(intel_dp, crtc_state);
		intel_dp_start_link_train(intel_dp, crtc_state);
		intel_dp_stop_link_train(intel_dp, crtc_state);
		break;
	}

	for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
		const struct intel_crtc_state *crtc_state =
			to_intel_crtc_state(crtc->base.state);

		/* Keep underrun reporting disabled until things are stable */
		intel_crtc_wait_for_next_vblank(crtc);

		intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, true);
		if (crtc_state->has_pch_encoder)
			intel_set_pch_fifo_underrun_reporting(dev_priv,
							      intel_crtc_pch_transcoder(crtc), true);
	}

	return 0;
}

static int intel_dp_prep_phy_test(struct intel_dp *intel_dp,
|
|
|
|
|
struct drm_modeset_acquire_ctx *ctx,
|
2022-02-03 18:38:20 +00:00
|
|
|
|
u8 *pipe_mask)
|
2020-09-30 10:04:12 +00:00
|
|
|
|
{
|
|
|
|
|
struct drm_i915_private *i915 = dp_to_i915(intel_dp);
|
|
|
|
|
struct drm_connector_list_iter conn_iter;
|
|
|
|
|
struct intel_connector *connector;
|
|
|
|
|
int ret = 0;
|
|
|
|
|
|
2022-02-03 18:38:20 +00:00
|
|
|
|
*pipe_mask = 0;
|
2020-09-30 10:04:12 +00:00
|
|
|
|
|
|
|
|
|
drm_connector_list_iter_begin(&i915->drm, &conn_iter);
|
|
|
|
|
for_each_intel_connector_iter(connector, &conn_iter) {
|
|
|
|
|
struct drm_connector_state *conn_state =
|
|
|
|
|
connector->base.state;
|
|
|
|
|
struct intel_crtc_state *crtc_state;
|
|
|
|
|
struct intel_crtc *crtc;
|
|
|
|
|
|
|
|
|
|
if (!intel_dp_has_connector(intel_dp, conn_state))
|
|
|
|
|
continue;
|
|
|
|
|
|
|
|
|
|
crtc = to_intel_crtc(conn_state->crtc);
|
|
|
|
|
if (!crtc)
|
|
|
|
|
continue;
|
|
|
|
|
|
|
|
|
|
ret = drm_modeset_lock(&crtc->base.mutex, ctx);
|
|
|
|
|
if (ret)
|
|
|
|
|
break;
|
|
|
|
|
|
|
|
|
|
crtc_state = to_intel_crtc_state(crtc->base.state);
|
|
|
|
|
|
|
|
|
|
drm_WARN_ON(&i915->drm, !intel_crtc_has_dp_encoder(crtc_state));
|
|
|
|
|
|
|
|
|
|
if (!crtc_state->hw.active)
|
|
|
|
|
continue;
|
|
|
|
|
|
|
|
|
|
if (conn_state->commit &&
|
|
|
|
|
!try_wait_for_completion(&conn_state->commit->hw_done))
|
|
|
|
|
continue;
|
|
|
|
|
|
2022-02-03 18:38:20 +00:00
|
|
|
|
*pipe_mask |= BIT(crtc->pipe);
|
2020-09-30 10:04:12 +00:00
|
|
|
|
}
|
|
|
|
|
drm_connector_list_iter_end(&conn_iter);
|
|
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
|
}

static int intel_dp_do_phy_test(struct intel_encoder *encoder,
				struct drm_modeset_acquire_ctx *ctx)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
	struct intel_crtc *crtc;
	u8 pipe_mask;
	int ret;

	ret = drm_modeset_lock(&dev_priv->drm.mode_config.connection_mutex,
			       ctx);
	if (ret)
		return ret;

	ret = intel_dp_prep_phy_test(intel_dp, ctx, &pipe_mask);
	if (ret)
		return ret;

	if (pipe_mask == 0)
		return 0;

	drm_dbg_kms(&dev_priv->drm, "[ENCODER:%d:%s] PHY test\n",
		    encoder->base.base.id, encoder->base.name);

	for_each_intel_crtc_in_pipe_mask(&dev_priv->drm, crtc, pipe_mask) {
		const struct intel_crtc_state *crtc_state =
			to_intel_crtc_state(crtc->base.state);

		/* test on the MST master transcoder */
		if (DISPLAY_VER(dev_priv) >= 12 &&
		    intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST) &&
		    !intel_dp_mst_is_master_trans(crtc_state))
			continue;

		intel_dp_process_phy_request(intel_dp, crtc_state);
		break;
	}

	return 0;
}

void intel_dp_phy_test(struct intel_encoder *encoder)
{
	struct drm_modeset_acquire_ctx ctx;
	int ret;

	drm_modeset_acquire_init(&ctx, 0);

	for (;;) {
		ret = intel_dp_do_phy_test(encoder, &ctx);

		if (ret == -EDEADLK) {
			drm_modeset_backoff(&ctx);
			continue;
		}

		break;
	}

	drm_modeset_drop_locks(&ctx);
	drm_modeset_acquire_fini(&ctx);
	drm_WARN(encoder->base.dev, ret,
		 "Acquiring modeset locks failed with %i\n", ret);
}

static void intel_dp_check_device_service_irq(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	u8 val;

	if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
		return;

	if (drm_dp_dpcd_readb(&intel_dp->aux,
			      DP_DEVICE_SERVICE_IRQ_VECTOR, &val) != 1 || !val)
		return;

	drm_dp_dpcd_writeb(&intel_dp->aux, DP_DEVICE_SERVICE_IRQ_VECTOR, val);

	if (val & DP_AUTOMATED_TEST_REQUEST)
		intel_dp_handle_test_request(intel_dp);

	if (val & DP_CP_IRQ)
		intel_hdcp_handle_cp_irq(intel_dp->attached_connector);

	if (val & DP_SINK_SPECIFIC_IRQ)
		drm_dbg_kms(&i915->drm, "Sink specific irq unhandled\n");
}

static void intel_dp_check_link_service_irq(struct intel_dp *intel_dp)
{
	u8 val;

	if (intel_dp->dpcd[DP_DPCD_REV] < 0x11)
		return;

	if (drm_dp_dpcd_readb(&intel_dp->aux,
			      DP_LINK_SERVICE_IRQ_VECTOR_ESI0, &val) != 1 || !val)
		return;

	if (drm_dp_dpcd_writeb(&intel_dp->aux,
			       DP_LINK_SERVICE_IRQ_VECTOR_ESI0, val) != 1)
		return;

	if (val & HDMI_LINK_STATUS_CHANGED)
		intel_dp_handle_hdmi_link_status_change(intel_dp);
}

/*
 * According to DP spec
 * 5.1.2:
 *  1. Read DPCD
 *  2. Configure link according to Receiver Capabilities
 *  3. Use Link Training from 2.5.3.3 and 3.5.1.3
 *  4. Check link status on receipt of hot-plug interrupt
 *
 * intel_dp_short_pulse - handles short pulse interrupts
 * when full detection is not required.
 * Returns %true if short pulse is handled and full detection
 * is NOT required and %false otherwise.
 */
static bool
intel_dp_short_pulse(struct intel_dp *intel_dp)
{
	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
	u8 old_sink_count = intel_dp->sink_count;
	bool ret;

	/*
	 * Clearing compliance test variables to allow capturing
	 * of values for next automated test request.
	 */
	memset(&intel_dp->compliance, 0, sizeof(intel_dp->compliance));

	/*
	 * Now read the DPCD to see if it's actually running
	 * If the current value of sink count doesn't match with
	 * the value that was stored earlier or dpcd read failed
	 * we need to do full detection
	 */
	ret = intel_dp_get_dpcd(intel_dp);

	if ((old_sink_count != intel_dp->sink_count) || !ret) {
		/* No need to proceed if we are going to do full detect */
		return false;
	}

	intel_dp_check_device_service_irq(intel_dp);
	intel_dp_check_link_service_irq(intel_dp);

	/* Handle CEC interrupts, if any */
	drm_dp_cec_irq(&intel_dp->aux);

	/* defer to the hotplug work for link retraining if needed */
	if (intel_dp_needs_link_retrain(intel_dp))
		return false;

	intel_psr_short_pulse(intel_dp);

	switch (intel_dp->compliance.test_type) {
	case DP_TEST_LINK_TRAINING:
		drm_dbg_kms(&dev_priv->drm,
			    "Link Training Compliance Test requested\n");
		/* Send a Hotplug Uevent to userspace to start modeset */
		drm_kms_helper_hotplug_event(&dev_priv->drm);
		break;
	case DP_TEST_LINK_PHY_TEST_PATTERN:
		drm_dbg_kms(&dev_priv->drm,
			    "PHY test pattern Compliance Test requested\n");
		/*
		 * Schedule long hpd to do the test
		 *
		 * FIXME get rid of the ad-hoc phy test modeset code
		 * and properly incorporate it into the normal modeset.
		 */
		return false;
	}

	return true;
}

/* XXX this is probably wrong for multiple downstream ports */
static enum drm_connector_status
intel_dp_detect_dpcd(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	u8 *dpcd = intel_dp->dpcd;
	u8 type;

	if (drm_WARN_ON(&i915->drm, intel_dp_is_edp(intel_dp)))
		return connector_status_connected;

	lspcon_resume(dig_port);

	if (!intel_dp_get_dpcd(intel_dp))
		return connector_status_disconnected;

	/* if there's no downstream port, we're done */
	if (!drm_dp_is_branch(dpcd))
		return connector_status_connected;

	/* If we're HPD-aware, SINK_COUNT changes dynamically */
	if (intel_dp_has_sink_count(intel_dp) &&
	    intel_dp->downstream_ports[0] & DP_DS_PORT_HPD) {
		return intel_dp->sink_count ?
			connector_status_connected : connector_status_disconnected;
	}

	if (intel_dp_can_mst(intel_dp))
		return connector_status_connected;

	/* If no HPD, poke DDC gently */
	if (drm_probe_ddc(&intel_dp->aux.ddc))
		return connector_status_connected;

	/* Well we tried, say unknown for unreliable port types */
	if (intel_dp->dpcd[DP_DPCD_REV] >= 0x11) {
		type = intel_dp->downstream_ports[0] & DP_DS_PORT_TYPE_MASK;
		if (type == DP_DS_PORT_TYPE_VGA ||
		    type == DP_DS_PORT_TYPE_NON_EDID)
			return connector_status_unknown;
	} else {
		type = intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] &
			DP_DWN_STRM_PORT_TYPE_MASK;
		if (type == DP_DWN_STRM_PORT_TYPE_ANALOG ||
		    type == DP_DWN_STRM_PORT_TYPE_OTHER)
			return connector_status_unknown;
	}

	/* Anything else is out of spec, warn and ignore */
	drm_dbg_kms(&i915->drm, "Broken DP branch device, ignoring\n");
	return connector_status_disconnected;
}

static enum drm_connector_status
edp_detect(struct intel_dp *intel_dp)
{
	return connector_status_connected;
}

/*
 * intel_digital_port_connected - is the specified port connected?
 * @encoder: intel_encoder
 *
 * In cases where there's a connector physically connected but it can't be used
 * by our hardware we also return false, since the rest of the driver should
 * pretty much treat the port as disconnected. This is relevant for type-C
 * (starting on ICL) where there's ownership involved.
 *
 * Return %true if port is connected, %false otherwise.
 */
bool intel_digital_port_connected(struct intel_encoder *encoder)
{
	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
	struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
	bool is_connected = false;
	intel_wakeref_t wakeref;

	with_intel_display_power(dev_priv, POWER_DOMAIN_DISPLAY_CORE, wakeref)
		is_connected = dig_port->connected(encoder);

	return is_connected;
}

static const struct drm_edid *
intel_dp_get_edid(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;
	const struct drm_edid *fixed_edid = connector->panel.fixed_edid;

	/* Use panel fixed edid if we have one */
	if (fixed_edid) {
		/* invalid edid */
		if (IS_ERR(fixed_edid))
			return NULL;

		return drm_edid_dup(fixed_edid);
	}

	return drm_edid_read_ddc(&connector->base, &intel_dp->aux.ddc);
}

static void
intel_dp_update_dfp(struct intel_dp *intel_dp,
		    const struct drm_edid *drm_edid)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_connector *connector = intel_dp->attached_connector;

	intel_dp->dfp.max_bpc =
		drm_dp_downstream_max_bpc(intel_dp->dpcd,
					  intel_dp->downstream_ports, drm_edid);

	intel_dp->dfp.max_dotclock =
		drm_dp_downstream_max_dotclock(intel_dp->dpcd,
					       intel_dp->downstream_ports);

	intel_dp->dfp.min_tmds_clock =
		drm_dp_downstream_min_tmds_clock(intel_dp->dpcd,
						 intel_dp->downstream_ports,
						 drm_edid);
	intel_dp->dfp.max_tmds_clock =
		drm_dp_downstream_max_tmds_clock(intel_dp->dpcd,
						 intel_dp->downstream_ports,
						 drm_edid);

	intel_dp->dfp.pcon_max_frl_bw =
		drm_dp_get_pcon_max_frl_bw(intel_dp->dpcd,
					   intel_dp->downstream_ports);

	drm_dbg_kms(&i915->drm,
		    "[CONNECTOR:%d:%s] DFP max bpc %d, max dotclock %d, TMDS clock %d-%d, PCON Max FRL BW %dGbps\n",
		    connector->base.base.id, connector->base.name,
		    intel_dp->dfp.max_bpc,
		    intel_dp->dfp.max_dotclock,
		    intel_dp->dfp.min_tmds_clock,
		    intel_dp->dfp.max_tmds_clock,
		    intel_dp->dfp.pcon_max_frl_bw);

	intel_dp_get_pcon_dsc_cap(intel_dp);
}

static bool
intel_dp_can_ycbcr420(struct intel_dp *intel_dp)
{
	if (source_can_output(intel_dp, INTEL_OUTPUT_FORMAT_YCBCR420) &&
	    (!drm_dp_is_branch(intel_dp->dpcd) || intel_dp->dfp.ycbcr420_passthrough))
		return true;

	if (source_can_output(intel_dp, INTEL_OUTPUT_FORMAT_RGB) &&
	    dfp_can_convert_from_rgb(intel_dp, INTEL_OUTPUT_FORMAT_YCBCR420))
		return true;

	if (source_can_output(intel_dp, INTEL_OUTPUT_FORMAT_YCBCR444) &&
	    dfp_can_convert_from_ycbcr444(intel_dp, INTEL_OUTPUT_FORMAT_YCBCR420))
		return true;

	return false;
}

static void
intel_dp_update_420(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_connector *connector = intel_dp->attached_connector;

	intel_dp->dfp.ycbcr420_passthrough =
		drm_dp_downstream_420_passthrough(intel_dp->dpcd,
						  intel_dp->downstream_ports);
	/* on-board LSPCON always assumed to support 4:4:4->4:2:0 conversion */
	intel_dp->dfp.ycbcr_444_to_420 =
		dp_to_dig_port(intel_dp)->lspcon.active ||
		drm_dp_downstream_444_to_420_conversion(intel_dp->dpcd,
							intel_dp->downstream_ports);
	intel_dp->dfp.rgb_to_ycbcr =
		drm_dp_downstream_rgb_to_ycbcr_conversion(intel_dp->dpcd,
							  intel_dp->downstream_ports,
							  DP_DS_HDMI_BT709_RGB_YCBCR_CONV);

	connector->base.ycbcr_420_allowed = intel_dp_can_ycbcr420(intel_dp);

	drm_dbg_kms(&i915->drm,
		    "[CONNECTOR:%d:%s] RGB->YcbCr conversion? %s, YCbCr 4:2:0 allowed? %s, YCbCr 4:4:4->4:2:0 conversion? %s\n",
		    connector->base.base.id, connector->base.name,
		    str_yes_no(intel_dp->dfp.rgb_to_ycbcr),
		    str_yes_no(connector->base.ycbcr_420_allowed),
		    str_yes_no(intel_dp->dfp.ycbcr_444_to_420));
}

static void
intel_dp_set_edid(struct intel_dp *intel_dp)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	struct intel_connector *connector = intel_dp->attached_connector;
	const struct drm_edid *drm_edid;
	bool vrr_capable;

	intel_dp_unset_edid(intel_dp);
	drm_edid = intel_dp_get_edid(intel_dp);
	connector->detect_edid = drm_edid;

	/* Below we depend on display info having been updated */
	drm_edid_connector_update(&connector->base, drm_edid);

	vrr_capable = intel_vrr_is_capable(connector);
	drm_dbg_kms(&i915->drm, "[CONNECTOR:%d:%s] VRR capable: %s\n",
		    connector->base.base.id, connector->base.name, str_yes_no(vrr_capable));
	drm_connector_set_vrr_capable_property(&connector->base, vrr_capable);

	intel_dp_update_dfp(intel_dp, drm_edid);
	intel_dp_update_420(intel_dp);

	drm_dp_cec_attach(&intel_dp->aux,
			  connector->base.display_info.source_physical_address);
}

static void
intel_dp_unset_edid(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;

	drm_dp_cec_unset_edid(&intel_dp->aux);
	drm_edid_free(connector->detect_edid);
	connector->detect_edid = NULL;

	intel_dp->dfp.max_bpc = 0;
	intel_dp->dfp.max_dotclock = 0;
	intel_dp->dfp.min_tmds_clock = 0;
	intel_dp->dfp.max_tmds_clock = 0;

	intel_dp->dfp.pcon_max_frl_bw = 0;

	intel_dp->dfp.ycbcr_444_to_420 = false;
	connector->base.ycbcr_420_allowed = false;

	drm_connector_set_vrr_capable_property(&connector->base,
					       false);
}

static void
intel_dp_detect_dsc_caps(struct intel_dp *intel_dp, struct intel_connector *connector)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);

	/* Read DP Sink DSC Cap DPCD regs for DP v1.4 */
	if (!HAS_DSC(i915))
		return;

	if (intel_dp_is_edp(intel_dp))
		intel_edp_get_dsc_sink_cap(intel_dp->edp_dpcd[0],
					   connector);
	else
		intel_dp_get_dsc_sink_cap(intel_dp->dpcd[DP_DPCD_REV],
					  connector);
}

static int
intel_dp_detect(struct drm_connector *connector,
		struct drm_modeset_acquire_ctx *ctx,
		bool force)
{
	struct drm_i915_private *dev_priv = to_i915(connector->dev);
	struct intel_connector *intel_connector =
		to_intel_connector(connector);
	struct intel_dp *intel_dp = intel_attached_dp(intel_connector);
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct intel_encoder *encoder = &dig_port->base;
	enum drm_connector_status status;

	drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s]\n",
		    connector->base.id, connector->name);
	drm_WARN_ON(&dev_priv->drm,
		    !drm_modeset_is_locked(&dev_priv->drm.mode_config.connection_mutex));

	if (!intel_display_device_enabled(dev_priv))
		return connector_status_disconnected;

	/* Can't disconnect eDP */
	if (intel_dp_is_edp(intel_dp))
		status = edp_detect(intel_dp);
else if (intel_digital_port_connected(encoder))
|
2015-11-18 15:19:30 +00:00
|
|
|
|
status = intel_dp_detect_dpcd(intel_dp);
|
2010-09-19 05:09:06 +00:00
|
|
|
|
else
|
2015-11-18 15:19:30 +00:00
|
|
|
|
status = connector_status_disconnected;
|
|
|
|
|
|
2016-10-03 07:55:16 +00:00
|
|
|
|
if (status == connector_status_disconnected) {
|
2016-12-10 00:22:50 +00:00
|
|
|
|
memset(&intel_dp->compliance, 0, sizeof(intel_dp->compliance));
|
2023-10-11 17:16:05 +00:00
|
|
|
|
memset(intel_connector->dp.dsc_dpcd, 0, sizeof(intel_connector->dp.dsc_dpcd));
|
2023-11-08 07:23:00 +00:00
|
|
|
|
intel_dp->psr.sink_panel_replay_support = false;
|
2015-10-28 10:00:36 +00:00
|
|
|
|
|
2016-04-11 17:11:24 +00:00
|
|
|
|
if (intel_dp->is_mst) {
|
drm/i915/dp: conversion to struct drm_device logging macros.
This converts various instances of printk based logging macros in
i915/display/intel_dp.c with the new struct drm_device based logging
macros using the following coccinelle script:
@rule1@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@rule2@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
New checkpatch warnings were fixed manually.
v2: fix merge conflict with new changes in file.
Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200122110844.2022-5-wambui.karugax@gmail.com
2020-01-22 11:08:42 +00:00
|
|
|
|
drm_dbg_kms(&dev_priv->drm,
|
|
|
|
|
"MST device may have disappeared %d vs %d\n",
|
|
|
|
|
intel_dp->is_mst,
|
|
|
|
|
intel_dp->mst_mgr.mst_state);
|
2016-04-11 17:11:24 +00:00
|
|
|
|
intel_dp->is_mst = false;
|
|
|
|
|
drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr,
|
|
|
|
|
intel_dp->is_mst);
|
|
|
|
|
}
|
|
|
|
|
|
2013-11-27 20:21:54 +00:00
|
|
|
|
goto out;
|
2015-10-28 10:00:36 +00:00
|
|
|
|
}
|
2010-09-19 05:09:06 +00:00
|
|
|
|
|
2024-02-29 04:37:16 +00:00
|
|
|
|
if (!intel_dp_is_edp(intel_dp))
|
|
|
|
|
intel_psr_init_dpcd(intel_dp);
|
|
|
|
|
|
2023-10-11 17:16:05 +00:00
|
|
|
|
intel_dp_detect_dsc_caps(intel_dp, intel_connector);
|
2020-06-16 21:11:45 +00:00
|
|
|
|
|
|
|
|
|
intel_dp_configure_mst(intel_dp);
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
|
* TODO: Reset link params when switching to MST mode, until MST
|
|
|
|
|
* supports link training fallback params.
|
|
|
|
|
*/
|
|
|
|
|
if (intel_dp->reset_link_params || intel_dp->is_mst) {
|
2021-10-18 09:41:51 +00:00
|
|
|
|
intel_dp_reset_max_link_params(intel_dp);
|
2017-02-08 00:54:11 +00:00
|
|
|
|
intel_dp->reset_link_params = false;
|
|
|
|
|
}
|
2016-12-06 00:27:36 +00:00
|
|
|
|
|
2016-07-29 13:52:39 +00:00
|
|
|
|
intel_dp_print_rates(intel_dp);
|
|
|
|
|
|
2016-07-29 13:51:16 +00:00
|
|
|
|
if (intel_dp->is_mst) {
|
2016-03-30 12:35:22 +00:00
|
|
|
|
/*
|
|
|
|
|
* If we are in MST mode then this connector
|
|
|
|
|
* won't appear connected or have anything
|
|
|
|
|
* with EDID on it
|
|
|
|
|
*/
|
2014-05-02 04:02:48 +00:00
|
|
|
|
status = connector_status_disconnected;
|
|
|
|
|
goto out;
|
2018-09-27 20:57:31 +00:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
|
* Some external monitors do not signal loss of link synchronization
|
|
|
|
|
* with an IRQ_HPD, so force a link status check.
|
|
|
|
|
*/
|
2018-09-27 20:57:33 +00:00
|
|
|
|
if (!intel_dp_is_edp(intel_dp)) {
|
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
|
|
ret = intel_dp_retrain_link(encoder, ctx);
|
2019-05-09 17:34:42 +00:00
|
|
|
|
if (ret)
|
2018-09-27 20:57:33 +00:00
|
|
|
|
return ret;
|
|
|
|
|
}
|
2014-05-02 04:02:48 +00:00
|
|
|
|
|
2015-10-28 10:00:36 +00:00
|
|
|
|
/*
|
|
|
|
|
* Clearing NACK and defer counts to get their exact values
|
|
|
|
|
* while reading EDID which are required by Compliance tests
|
|
|
|
|
* 4.2.2.4 and 4.2.2.5
|
|
|
|
|
*/
|
|
|
|
|
intel_dp->aux.i2c_nack_count = 0;
|
|
|
|
|
intel_dp->aux.i2c_defer_count = 0;
|
|
|
|
|
|
2014-09-02 19:04:00 +00:00
|
|
|
|
intel_dp_set_edid(intel_dp);
|
2018-09-27 20:57:34 +00:00
|
|
|
|
if (intel_dp_is_edp(intel_dp) ||
|
|
|
|
|
to_intel_connector(connector)->detect_edid)
|
2016-10-03 07:55:16 +00:00
|
|
|
|
status = connector_status_connected;
|
2013-11-27 20:21:54 +00:00
|
|
|
|
|
2020-12-18 10:37:19 +00:00
|
|
|
|
intel_dp_check_device_service_irq(intel_dp);
|
drm/i915: Move Displayport test request and sink IRQ logic to intel_dp_detect()
Due to changes in the driver and to support Displayport compliance testing,
the test request and sink IRQ logic has been relocated from
intel_dp_check_link_status to intel_dp_detect. This is because the bulk of the
compliance tests that set the TEST_REQUEST bit in the DEVICE_IRQ field of the
DPCD issue a long pulse / hot plug event to signify the start of the test.
Currently, for a long pulse, intel_dp_check_link_status is not called for a
long HPD pulse, so if test requests come in, they cannot be detected by the
driver.
Once located in the intel_dp_detect, in the regular hot plug event path,
proper detection of Displayport compliance test requests occurs which then
invokes the test handler to support them. Additionally, this places compliance
testing in the normal operational paths, eliminating as much special case code
as possible.
The only change in intel_dp_check_link_status with this patch is that when
the IRQ is the result of a test request from the sink, the test handler is not
invoked during the short pulse path. Short pulse test requests are for a
particular variety of tests (mainly link training) that will be implemented
in the future. Once those tests are available, the test request handler will
be called from here as well.
V2:
- Rewored the commit message to be more clear about the content and intent
of this patch
- Restore IRQ detection logic to intel_dp_check_link_status(). Continue to
detect and clear sink IRQs in the short pulse case. Ignore test requests
in the short pulses for now since they are for future test implementations.
Signed-off-by: Todd Previte <tprevite@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-04-20 22:27:34 +00:00
|
|
|
|
|
2013-11-27 20:21:54 +00:00
|
|
|
|
out:
|
2016-10-03 07:55:16 +00:00
|
|
|
|
if (status != connector_status_connected && !intel_dp->is_mst)
|
2016-03-30 12:35:22 +00:00
|
|
|
|
intel_dp_unset_edid(intel_dp);
|
2016-03-30 12:35:23 +00:00
|
|
|
|
|
2020-04-24 12:50:52 +00:00
|
|
|
|
if (!intel_dp_is_edp(intel_dp))
|
|
|
|
|
drm_dp_set_subconnector_property(connector,
|
|
|
|
|
status,
|
|
|
|
|
intel_dp->dpcd,
|
|
|
|
|
intel_dp->downstream_ports);
|
2016-10-03 07:55:16 +00:00
|
|
|
|
return status;
|
2016-03-30 12:35:22 +00:00
|
|
|
|
}

static void
intel_dp_force(struct drm_connector *connector)
{
	struct intel_dp *intel_dp = intel_attached_dp(to_intel_connector(connector));
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct intel_encoder *intel_encoder = &dig_port->base;
	struct drm_i915_private *dev_priv = to_i915(intel_encoder->base.dev);

	drm_dbg_kms(&dev_priv->drm, "[CONNECTOR:%d:%s]\n",
		    connector->base.id, connector->name);

	intel_dp_unset_edid(intel_dp);

	if (connector->status != connector_status_connected)
		return;

	intel_dp_set_edid(intel_dp);
}

static int intel_dp_get_modes(struct drm_connector *connector)
{
	struct intel_connector *intel_connector = to_intel_connector(connector);
	int num_modes;

	/* drm_edid_connector_update() done in ->detect() or ->force() */
	num_modes = drm_edid_connector_add_modes(connector);

	/* Also add fixed mode, which may or may not be present in EDID */
	if (intel_dp_is_edp(intel_attached_dp(intel_connector)))
		num_modes += intel_panel_get_modes(intel_connector);

	if (num_modes)
		return num_modes;

	if (!intel_connector->detect_edid) {
		struct intel_dp *intel_dp = intel_attached_dp(intel_connector);
		struct drm_display_mode *mode;

		mode = drm_dp_downstream_mode(connector->dev,
					      intel_dp->dpcd,
					      intel_dp->downstream_ports);
		if (mode) {
			drm_mode_probed_add(connector, mode);
			num_modes++;
		}
	}

	return num_modes;
}

static int
intel_dp_connector_register(struct drm_connector *connector)
{
	struct drm_i915_private *i915 = to_i915(connector->dev);
	struct intel_dp *intel_dp = intel_attached_dp(to_intel_connector(connector));
	struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
	struct intel_lspcon *lspcon = &dig_port->lspcon;
	int ret;

	ret = intel_connector_register(connector);
	if (ret)
		return ret;

	drm_dbg_kms(&i915->drm, "registering %s bus for %s\n",
		    intel_dp->aux.name, connector->kdev->kobj.name);

	intel_dp->aux.dev = connector->kdev;
	ret = drm_dp_aux_register(&intel_dp->aux);
	if (!ret)
		drm_dp_cec_register_connector(&intel_dp->aux, connector);

	if (!intel_bios_encoder_is_lspcon(dig_port->base.devdata))
		return ret;

	/*
	 * ToDo: Clean this up to handle lspcon init and resume more
	 * efficiently and streamlined.
	 */
	if (lspcon_init(dig_port)) {
		lspcon_detect_hdr_capability(lspcon);
		if (lspcon->hdr_supported)
			drm_connector_attach_hdr_output_metadata_property(connector);
	}

	return ret;
}

static void
intel_dp_connector_unregister(struct drm_connector *connector)
{
	struct intel_dp *intel_dp = intel_attached_dp(to_intel_connector(connector));

	drm_dp_cec_unregister_connector(&intel_dp->aux);
	drm_dp_aux_unregister(&intel_dp->aux);
	intel_connector_unregister(connector);
}

void intel_dp_connector_sync_state(struct intel_connector *connector,
				   const struct intel_crtc_state *crtc_state)
{
	struct drm_i915_private *i915 = to_i915(connector->base.dev);

	if (crtc_state && crtc_state->dsc.compression_enable) {
		drm_WARN_ON(&i915->drm, !connector->dp.dsc_decompression_aux);
		connector->dp.dsc_decompression_enabled = true;
	} else {
		connector->dp.dsc_decompression_enabled = false;
	}
}

void intel_dp_encoder_flush_work(struct drm_encoder *encoder)
{
	struct intel_digital_port *dig_port = enc_to_dig_port(to_intel_encoder(encoder));
	struct intel_dp *intel_dp = &dig_port->dp;

	intel_dp_mst_encoder_cleanup(dig_port);

	intel_pps_vdd_off_sync(intel_dp);

	/*
	 * Ensure power off delay is respected on module remove, so that we can
	 * reduce delays at driver probe. See pps_init_timestamps().
	 */
	intel_pps_wait_power_cycle(intel_dp);

	intel_dp_aux_fini(intel_dp);
}

void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
{
	struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder);

	intel_pps_vdd_off_sync(intel_dp);
}

void intel_dp_encoder_shutdown(struct intel_encoder *intel_encoder)
{
	struct intel_dp *intel_dp = enc_to_intel_dp(intel_encoder);

	intel_pps_wait_power_cycle(intel_dp);
}

static int intel_modeset_tile_group(struct intel_atomic_state *state,
				    int tile_group_id)
{
	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
	struct drm_connector_list_iter conn_iter;
	struct drm_connector *connector;
	int ret = 0;

	drm_connector_list_iter_begin(&dev_priv->drm, &conn_iter);
	drm_for_each_connector_iter(connector, &conn_iter) {
		struct drm_connector_state *conn_state;
		struct intel_crtc_state *crtc_state;
		struct intel_crtc *crtc;

		if (!connector->has_tile ||
		    connector->tile_group->id != tile_group_id)
			continue;

		conn_state = drm_atomic_get_connector_state(&state->base,
							    connector);
		if (IS_ERR(conn_state)) {
			ret = PTR_ERR(conn_state);
			break;
		}

		crtc = to_intel_crtc(conn_state->crtc);

		if (!crtc)
			continue;

		crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
		crtc_state->uapi.mode_changed = true;

		ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
		if (ret)
			break;
	}
	drm_connector_list_iter_end(&conn_iter);

	return ret;
}

static int intel_modeset_affected_transcoders(struct intel_atomic_state *state, u8 transcoders)
{
	struct drm_i915_private *dev_priv = to_i915(state->base.dev);
	struct intel_crtc *crtc;

	if (transcoders == 0)
		return 0;

	for_each_intel_crtc(&dev_priv->drm, crtc) {
		struct intel_crtc_state *crtc_state;
		int ret;

		crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
		if (IS_ERR(crtc_state))
			return PTR_ERR(crtc_state);

		if (!crtc_state->hw.enable)
			continue;

		if (!(transcoders & BIT(crtc_state->cpu_transcoder)))
			continue;

		crtc_state->uapi.mode_changed = true;

		ret = drm_atomic_add_affected_connectors(&state->base, &crtc->base);
		if (ret)
			return ret;

		ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
		if (ret)
			return ret;

		transcoders &= ~BIT(crtc_state->cpu_transcoder);
	}

	drm_WARN_ON(&dev_priv->drm, transcoders != 0);

	return 0;
}

static int intel_modeset_synced_crtcs(struct intel_atomic_state *state,
				      struct drm_connector *connector)
{
	const struct drm_connector_state *old_conn_state =
		drm_atomic_get_old_connector_state(&state->base, connector);
	const struct intel_crtc_state *old_crtc_state;
	struct intel_crtc *crtc;
	u8 transcoders;

	crtc = to_intel_crtc(old_conn_state->crtc);
	if (!crtc)
		return 0;

	old_crtc_state = intel_atomic_get_old_crtc_state(state, crtc);

	if (!old_crtc_state->hw.active)
		return 0;

	transcoders = old_crtc_state->sync_mode_slaves_mask;
	if (old_crtc_state->master_transcoder != INVALID_TRANSCODER)
		transcoders |= BIT(old_crtc_state->master_transcoder);

	return intel_modeset_affected_transcoders(state,
						  transcoders);
}

static int intel_dp_connector_atomic_check(struct drm_connector *conn,
					   struct drm_atomic_state *_state)
{
	struct drm_i915_private *dev_priv = to_i915(conn->dev);
	struct intel_atomic_state *state = to_intel_atomic_state(_state);
	struct drm_connector_state *conn_state = drm_atomic_get_new_connector_state(_state, conn);
	struct intel_connector *intel_conn = to_intel_connector(conn);
	struct intel_dp *intel_dp = enc_to_intel_dp(intel_conn->encoder);
	int ret;

	ret = intel_digital_connector_atomic_check(conn, &state->base);
	if (ret)
		return ret;

	if (intel_dp_mst_source_support(intel_dp)) {
		ret = drm_dp_mst_root_conn_atomic_check(conn_state, &intel_dp->mst_mgr);
		if (ret)
			return ret;
	}

	/*
	 * We don't enable port sync on BDW due to missing w/as and
	 * due to not having adjusted the modeset sequence appropriately.
	 */
	if (DISPLAY_VER(dev_priv) < 9)
		return 0;

	if (!intel_connector_needs_modeset(state, conn))
		return 0;

	if (conn->has_tile) {
		ret = intel_modeset_tile_group(state, conn->tile_group->id);
		if (ret)
			return ret;
	}

	return intel_modeset_synced_crtcs(state, conn);
}

static void intel_dp_oob_hotplug_event(struct drm_connector *connector,
				       enum drm_connector_status hpd_state)
{
	struct intel_encoder *encoder = intel_attached_encoder(to_intel_connector(connector));
	struct drm_i915_private *i915 = to_i915(connector->dev);
	bool hpd_high = hpd_state == connector_status_connected;
	unsigned int hpd_pin = encoder->hpd_pin;
	bool need_work = false;

	spin_lock_irq(&i915->irq_lock);
	if (hpd_high != test_bit(hpd_pin, &i915->display.hotplug.oob_hotplug_last_state)) {
		i915->display.hotplug.event_bits |= BIT(hpd_pin);

		__assign_bit(hpd_pin, &i915->display.hotplug.oob_hotplug_last_state, hpd_high);
		need_work = true;
	}
	spin_unlock_irq(&i915->irq_lock);

	if (need_work)
		queue_delayed_work(i915->unordered_wq, &i915->display.hotplug.hotplug_work, 0);
}

static const struct drm_connector_funcs intel_dp_connector_funcs = {
	.force = intel_dp_force,
	.fill_modes = drm_helper_probe_single_connector_modes,
	.atomic_get_property = intel_digital_connector_atomic_get_property,
	.atomic_set_property = intel_digital_connector_atomic_set_property,
	.late_register = intel_dp_connector_register,
	.early_unregister = intel_dp_connector_unregister,
	.destroy = intel_connector_destroy,
	.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
	.atomic_duplicate_state = intel_digital_connector_duplicate_state,
	.oob_hotplug_event = intel_dp_oob_hotplug_event,
};

static const struct drm_connector_helper_funcs intel_dp_connector_helper_funcs = {
	.detect_ctx = intel_dp_detect,
	.get_modes = intel_dp_get_modes,
	.mode_valid = intel_dp_mode_valid,
	.atomic_check = intel_dp_connector_atomic_check,
};
|
enum irqreturn
intel_dp_hpd_pulse(struct intel_digital_port *dig_port, bool long_hpd)
{
	struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);
	struct intel_dp *intel_dp = &dig_port->dp;

	if (dig_port->base.type == INTEL_OUTPUT_EDP &&
	    (long_hpd || !intel_pps_have_panel_power_or_vdd(intel_dp))) {
		/*
		 * vdd off can generate a long/short pulse on eDP which
		 * would require vdd on to handle it, and thus we
		 * would end up in an endless cycle of
		 * "vdd off -> long/short hpd -> vdd on -> detect -> vdd off -> ..."
		 */
		drm_dbg_kms(&i915->drm,
			    "ignoring %s hpd on eDP [ENCODER:%d:%s]\n",
			    long_hpd ? "long" : "short",
			    dig_port->base.base.base.id,
			    dig_port->base.base.name);
		return IRQ_HANDLED;
	}

	drm_dbg_kms(&i915->drm, "got hpd irq on [ENCODER:%d:%s] - %s\n",
		    dig_port->base.base.base.id,
		    dig_port->base.base.name,
		    long_hpd ? "long" : "short");

	if (long_hpd) {
		intel_dp->reset_link_params = true;
		return IRQ_NONE;
	}

	if (intel_dp->is_mst) {
		if (!intel_dp_check_mst_status(intel_dp))
			return IRQ_NONE;
	} else if (!intel_dp_short_pulse(intel_dp)) {
		return IRQ_NONE;
	}

	return IRQ_HANDLED;
}

static bool _intel_dp_is_port_edp(struct drm_i915_private *dev_priv,
				  const struct intel_bios_encoder_data *devdata,
				  enum port port)
{
	/*
	 * eDP is not supported on g4x, so bail out early just
	 * for a bit of extra safety in case the VBT is bonkers.
	 */
	if (DISPLAY_VER(dev_priv) < 5)
		return false;

	if (DISPLAY_VER(dev_priv) < 9 && port == PORT_A)
		return true;

	return devdata && intel_bios_encoder_supports_edp(devdata);
}

bool intel_dp_is_port_edp(struct drm_i915_private *i915, enum port port)
{
	const struct intel_bios_encoder_data *devdata =
		intel_bios_encoder_data_lookup(i915, port);

	return _intel_dp_is_port_edp(i915, devdata, port);
}

static bool
has_gamut_metadata_dip(struct intel_encoder *encoder)
{
	struct drm_i915_private *i915 = to_i915(encoder->base.dev);
	enum port port = encoder->port;

	if (intel_bios_encoder_is_lspcon(encoder->devdata))
		return false;

	if (DISPLAY_VER(i915) >= 11)
		return true;

	if (port == PORT_A)
		return false;

	if (IS_HASWELL(i915) || IS_BROADWELL(i915) ||
	    DISPLAY_VER(i915) >= 9)
		return true;

	return false;
}

static void
intel_dp_add_properties(struct intel_dp *intel_dp, struct drm_connector *connector)
{
	struct drm_i915_private *dev_priv = to_i915(connector->dev);
	enum port port = dp_to_dig_port(intel_dp)->base.port;

	if (!intel_dp_is_edp(intel_dp))
		drm_connector_attach_dp_subconnector_property(connector);

	if (!IS_G4X(dev_priv) && port != PORT_A)
		intel_attach_force_audio_property(connector);

	intel_attach_broadcast_rgb_property(connector);
	if (HAS_GMCH(dev_priv))
		drm_connector_attach_max_bpc_property(connector, 6, 10);
	else if (DISPLAY_VER(dev_priv) >= 5)
		drm_connector_attach_max_bpc_property(connector, 6, 12);

	/* Register HDMI colorspace for case of lspcon */
	if (intel_bios_encoder_is_lspcon(dp_to_dig_port(intel_dp)->base.devdata)) {
		drm_connector_attach_content_type_property(connector);
		intel_attach_hdmi_colorspace_property(connector);
	} else {
		intel_attach_dp_colorspace_property(connector);
	}

	if (has_gamut_metadata_dip(&dp_to_dig_port(intel_dp)->base))
		drm_connector_attach_hdr_output_metadata_property(connector);

	if (HAS_VRR(dev_priv))
		drm_connector_attach_vrr_capable_property(connector);
}

static void
intel_edp_add_properties(struct intel_dp *intel_dp)
{
	struct intel_connector *connector = intel_dp->attached_connector;
	struct drm_i915_private *i915 = to_i915(connector->base.dev);
	const struct drm_display_mode *fixed_mode =
		intel_panel_preferred_fixed_mode(connector);

	intel_attach_scaling_mode_property(&connector->base);

	drm_connector_set_panel_orientation_with_quirk(&connector->base,
						       i915->display.vbt.orientation,
						       fixed_mode->hdisplay,
						       fixed_mode->vdisplay);
}

static void intel_edp_backlight_setup(struct intel_dp *intel_dp,
				      struct intel_connector *connector)
{
	struct drm_i915_private *i915 = dp_to_i915(intel_dp);
	enum pipe pipe = INVALID_PIPE;

	if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) {
		/*
		 * Figure out the current pipe for the initial backlight setup.
		 * If the current pipe isn't valid, try the PPS pipe, and if that
		 * fails just assume pipe A.
		 */
		pipe = vlv_active_pipe(intel_dp);

		if (pipe != PIPE_A && pipe != PIPE_B)
			pipe = intel_dp->pps.pps_pipe;

		if (pipe != PIPE_A && pipe != PIPE_B)
			pipe = PIPE_A;
	}

	intel_backlight_setup(connector, pipe);
}

static bool intel_edp_init_connector(struct intel_dp *intel_dp,
				     struct intel_connector *intel_connector)
{
	struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
	struct drm_connector *connector = &intel_connector->base;
	struct drm_display_mode *fixed_mode;
	struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
	bool has_dpcd;
	const struct drm_edid *drm_edid;

	if (!intel_dp_is_edp(intel_dp))
		return true;

	/*
	 * On IBX/CPT we may get here with LVDS already registered. Since the
	 * driver uses the only internal power sequencer available for both
	 * eDP and LVDS, bail out early in this case to prevent interfering
	 * with an already powered-on LVDS power sequencer.
	 */
	if (intel_get_lvds_encoder(dev_priv)) {
		drm_WARN_ON(&dev_priv->drm,
			    !(HAS_PCH_IBX(dev_priv) || HAS_PCH_CPT(dev_priv)));
		drm_info(&dev_priv->drm,
			 "LVDS was detected, not registering eDP\n");

		return false;
	}

	intel_bios_init_panel_early(dev_priv, &intel_connector->panel,
				    encoder->devdata);

	if (!intel_pps_init(intel_dp)) {
		drm_info(&dev_priv->drm,
			 "[ENCODER:%d:%s] unusable PPS, disabling eDP\n",
			 encoder->base.base.id, encoder->base.name);
		/*
		 * The BIOS may have still enabled VDD on the PPS even
		 * though it's unusable. Make sure we turn it back off
		 * and release the power domain references/etc.
		 */
		goto out_vdd_off;
	}

	/*
	 * Enable HPD sense for live status check.
	 * intel_hpd_irq_setup() will turn it off again
	 * if it's no longer needed later.
	 *
	 * The DPCD probe below will make sure VDD is on.
	 */
	intel_hpd_enable_detection(encoder);

	/* Cache DPCD and EDID for edp. */
	has_dpcd = intel_edp_init_dpcd(intel_dp, intel_connector);

	if (!has_dpcd) {
		/* if this fails, presume the device is a ghost */
		drm_info(&dev_priv->drm,
			 "[ENCODER:%d:%s] failed to retrieve link info, disabling eDP\n",
			 encoder->base.base.id, encoder->base.name);
		goto out_vdd_off;
	}
|
|
|
|
|
|
drm/i915: Check HPD live state during eDP probe
We need to untangle the mess where some SKL machines (at least)
declare both DDI A and DDI E to be present in their VBT, and
both using AUX A. DDI A is a ghost eDP, wheres DDI E may be a
real DP->VGA converter.
Currently that is handled by checking the VBT child devices
for conflicts before output probing. But that kind of solution
will not work for the ADL phantom dual eDP VBTs. I think on
those we just have to probe the eDP first. And would be nice
to use the same probe scheme for everything.
On these SKL systems if we probe DDI A first (which is only
natural given it's declared by VBT first) we will get an answer
via AUX, but it came from the DP->VGA converter hooked to the
DDI E, not DDI A. Thus we mistakenly register eDP on DDI A
and screw up the real DP device in DDI E.
To fix this let's check the HPD live state during the eDP probe.
If we got an answer via DPCD but HPD is still down let's assume
we got the answer from someone else.
Smoke tested on all my eDP machines (ilk,hsw-ult,tgl,adl) and
I also tested turning off all HPD hardware prior to loading
i915 to make sure it all comes up properly. And I simulated
the failure path too by not turning on HPD sense and that
correctly gave up on eDP.
I *think* Windows might just fully depend on HPD here. I
couldn't really find any other way they probe displays. And
I did find code where they also check the live state prior
to AUX transfers (something Imre and I have also talked
about perhaps doing). That would also solve this as we'd
not succeed in the eDP probe DPCD reads.
Other solutions I've considered:
- Reintrduce DDI strap checks on SKL. Unfortunately we just
don't have any idea how reliable they are on real production
hardware, and commit 5a2376d1360b ("drm/i915/skl: WaIgnoreDDIAStrap
is forever, always init DDI A") does suggest that not very.
Sadly that commit is very poor in details :/
Also the systems (Asrock B250M-HDV at least) fixed by commit
41e35ffb380b ("drm/i915: Favor last VBT child device with
conflicting AUX ch/DDC pin") might still not work since we
don't know what their straps indicate. Stupid me for not
asking the reporter to check those at the time :(
We have currently two CI machines (fi-cfl-guc,fi-cfl-8700k
both MS-7B54/Z370M) that also declare both DDI A and DDI E
in VBT to use AUX A, and on these the DDI A strap is also
set. There doesn't seem to be anything hooked up to either
DDI however. But given the DDI A strap is wrong on these it
might well be wrong on the Asrock too.
Most other CI machines seem to have straps that generally
match the VBT. fi-kbl-soraka is an exception though as DDI D
strap is not set, but it is declared in VBT as a DP++ port.
No idea if there's a real physical port to go with it or not.
- Some kind of quirk just for the cases where both DDI A and DDI E
are present in VBT. Might be feasible given we've ignored
DDI A in these cases up to now successfully. But feels rather
unsatisfactory, and not very future proof against funny VBTs.
References: https://bugs.freedesktop.org/show_bug.cgi?id=111966
Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230417131728.7705-4-ville.syrjala@linux.intel.com
2023-04-17 13:17:27 +00:00
|
|
|
|
/*
|
|
|
|
|
* VBT and straps are liars. Also check HPD as that seems
|
|
|
|
|
* to be the most reliable piece of information available.
|
2023-09-08 05:25:27 +00:00
|
|
|
|
*
|
|
|
|
|
* ... expect on devices that forgot to hook HPD up for eDP
|
|
|
|
|
* (eg. Acer Chromebook C710), so we'll check it only if multiple
|
|
|
|
|
* ports are attempting to use the same AUX CH, according to VBT.
|
drm/i915: Check HPD live state during eDP probe
We need to untangle the mess where some SKL machines (at least)
declare both DDI A and DDI E to be present in their VBT, and
both using AUX A. DDI A is a ghost eDP, wheres DDI E may be a
real DP->VGA converter.
Currently that is handled by checking the VBT child devices
for conflicts before output probing. But that kind of solution
will not work for the ADL phantom dual eDP VBTs. I think on
those we just have to probe the eDP first. And would be nice
to use the same probe scheme for everything.
On these SKL systems if we probe DDI A first (which is only
natural given it's declared by VBT first) we will get an answer
via AUX, but it came from the DP->VGA converter hooked to the
DDI E, not DDI A. Thus we mistakenly register eDP on DDI A
and screw up the real DP device in DDI E.
To fix this let's check the HPD live state during the eDP probe.
If we got an answer via DPCD but HPD is still down let's assume
we got the answer from someone else.
Smoke tested on all my eDP machines (ilk,hsw-ult,tgl,adl) and
I also tested turning off all HPD hardware prior to loading
i915 to make sure it all comes up properly. And I simulated
the failure path too by not turning on HPD sense and that
correctly gave up on eDP.
I *think* Windows might just fully depend on HPD here. I
couldn't really find any other way they probe displays. And
I did find code where they also check the live state prior
to AUX transfers (something Imre and I have also talked
about perhaps doing). That would also solve this as we'd
not succeed in the eDP probe DPCD reads.
Other solutions I've considered:
- Reintrduce DDI strap checks on SKL. Unfortunately we just
don't have any idea how reliable they are on real production
hardware, and commit 5a2376d1360b ("drm/i915/skl: WaIgnoreDDIAStrap
is forever, always init DDI A") does suggest that not very.
Sadly that commit is very poor in details :/
Also the systems (Asrock B250M-HDV at least) fixed by commit
41e35ffb380b ("drm/i915: Favor last VBT child device with
conflicting AUX ch/DDC pin") might still not work since we
don't know what their straps indicate. Stupid me for not
asking the reporter to check those at the time :(
We have currently two CI machines (fi-cfl-guc,fi-cfl-8700k
both MS-7B54/Z370M) that also declare both DDI A and DDI E
in VBT to use AUX A, and on these the DDI A strap is also
set. There doesn't seem to be anything hooked up to either
DDI however. But given the DDI A strap is wrong on these it
might well be wrong on the Asrock too.
Most other CI machines seem to have straps that generally
match the VBT. fi-kbl-soraka is an exception though as DDI D
strap is not set, but it is declared in VBT as a DP++ port.
No idea if there's a real physical port to go with it or not.
- Some kind of quirk just for the cases where both DDI A and DDI E
are present in VBT. Might be feasible given we've ignored
DDI A in these cases up to now successfully. But feels rather
unsatisfactory, and not very future proof against funny VBTs.
References: https://bugs.freedesktop.org/show_bug.cgi?id=111966
Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230417131728.7705-4-ville.syrjala@linux.intel.com
2023-04-17 13:17:27 +00:00
|
|
|
|
*/
|
2023-11-14 14:23:33 +00:00
|
|
|
|
if (intel_bios_dp_has_shared_aux_ch(encoder->devdata)) {
|
drm/i915: Check HPD live state during eDP probe
We need to untangle the mess where some SKL machines (at least)
declare both DDI A and DDI E to be present in their VBT, and
both using AUX A. DDI A is a ghost eDP, wheres DDI E may be a
real DP->VGA converter.
2023-04-17 13:17:27 +00:00
|
|
|
|
/*
|
|
|
|
|
* If this fails, presume the DPCD answer came
|
|
|
|
|
* from some other port using the same AUX CH.
|
|
|
|
|
*
|
|
|
|
|
* FIXME maybe cleaner to check this before the
|
|
|
|
|
* DPCD read? Would need to sort out the VDD handling...
|
|
|
|
|
*/
|
2023-11-14 14:23:33 +00:00
|
|
|
|
if (!intel_digital_port_connected(encoder)) {
|
|
|
|
|
drm_info(&dev_priv->drm,
|
|
|
|
|
"[ENCODER:%d:%s] HPD is down, disabling eDP\n",
|
|
|
|
|
encoder->base.base.id, encoder->base.name);
|
|
|
|
|
goto out_vdd_off;
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
/*
|
|
|
|
|
* Unfortunately even the HPD based detection fails on
|
|
|
|
|
* eg. Asus B360M-A (CFL+CNP), so as a last resort fall
|
|
|
|
|
* back to checking for a VGA branch device. Only do this
|
|
|
|
|
* on known affected platforms to minimize false positives.
|
|
|
|
|
*/
|
|
|
|
|
if (DISPLAY_VER(dev_priv) == 9 && drm_dp_is_branch(intel_dp->dpcd) &&
|
|
|
|
|
(intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_TYPE_MASK) ==
|
|
|
|
|
DP_DWN_STRM_PORT_TYPE_ANALOG) {
|
|
|
|
|
drm_info(&dev_priv->drm,
|
|
|
|
|
"[ENCODER:%d:%s] VGA converter detected, disabling eDP\n",
|
|
|
|
|
encoder->base.base.id, encoder->base.name);
|
|
|
|
|
goto out_vdd_off;
|
|
|
|
|
}
|
drm/i915: Check HPD live state during eDP probe
We need to untangle the mess where some SKL machines (at least)
declare both DDI A and DDI E to be present in their VBT, and
both using AUX A. DDI A is a ghost eDP, whereas DDI E may be a
real DP->VGA converter.
Currently that is handled by checking the VBT child devices
for conflicts before output probing. But that kind of solution
will not work for the ADL phantom dual eDP VBTs. I think on
those we just have to probe the eDP first. And it would be nice
to use the same probe scheme for everything.
On these SKL systems if we probe DDI A first (which is only
natural given it's declared by VBT first) we will get an answer
via AUX, but it came from the DP->VGA converter hooked to the
DDI E, not DDI A. Thus we mistakenly register eDP on DDI A
and screw up the real DP device in DDI E.
To fix this let's check the HPD live state during the eDP probe.
If we got an answer via DPCD but HPD is still down let's assume
we got the answer from someone else.
Smoke tested on all my eDP machines (ilk,hsw-ult,tgl,adl) and
I also tested turning off all HPD hardware prior to loading
i915 to make sure it all comes up properly. And I simulated
the failure path too by not turning on HPD sense and that
correctly gave up on eDP.
I *think* Windows might just fully depend on HPD here. I
couldn't really find any other way they probe displays. And
I did find code where they also check the live state prior
to AUX transfers (something Imre and I have also talked
about perhaps doing). That would also solve this as we'd
not succeed in the eDP probe DPCD reads.
Other solutions I've considered:
- Reintroduce DDI strap checks on SKL. Unfortunately we just
don't have any idea how reliable they are on real production
hardware, and commit 5a2376d1360b ("drm/i915/skl: WaIgnoreDDIAStrap
  is forever, always init DDI A") does suggest they are not very reliable.
Sadly that commit is very poor in details :/
Also the systems (Asrock B250M-HDV at least) fixed by commit
41e35ffb380b ("drm/i915: Favor last VBT child device with
conflicting AUX ch/DDC pin") might still not work since we
don't know what their straps indicate. Stupid me for not
asking the reporter to check those at the time :(
We currently have two CI machines (fi-cfl-guc, fi-cfl-8700k
both MS-7B54/Z370M) that also declare both DDI A and DDI E
in VBT to use AUX A, and on these the DDI A strap is also
set. There doesn't seem to be anything hooked up to either
DDI however. But given the DDI A strap is wrong on these it
might well be wrong on the Asrock too.
Most other CI machines seem to have straps that generally
match the VBT. fi-kbl-soraka is an exception though as DDI D
strap is not set, but it is declared in VBT as a DP++ port.
No idea if there's a real physical port to go with it or not.
- Some kind of quirk just for the cases where both DDI A and DDI E
are present in VBT. Might be feasible given we've ignored
DDI A in these cases up to now successfully. But feels rather
unsatisfactory, and not very future proof against funny VBTs.
References: https://bugs.freedesktop.org/show_bug.cgi?id=111966
Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230417131728.7705-4-ville.syrjala@linux.intel.com
2023-04-17 13:17:27 +00:00
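The probe decision described in the commit message above (reject eDP if DPCD answered but HPD is still down, with the later VGA branch-device check as a last resort) can be sketched in isolation. This is a standalone illustration, not i915 code: the struct, function name, and the `display_ver` gate are invented for the example; only the DPCD offset and bit values follow the DisplayPort spec.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* DPCD receiver-capability byte 0x05 per the DisplayPort spec:
 * bit 0 flags a downstream (branch) port, bits 2:1 encode its type,
 * where type 01b is an analog (DP->VGA) converter. */
#define DP_DOWNSTREAMPORT_PRESENT    0x05
#define DP_DWN_STRM_PORT_PRESENT     (1 << 0)
#define DP_DWN_STRM_PORT_TYPE_MASK   0x06
#define DP_DWN_STRM_PORT_TYPE_ANALOG (1 << 1)

/* Illustrative stand-in for the driver state sampled at probe time. */
struct probe_state {
	bool hpd_live;    /* HPD live state when the DPCD reply arrived */
	int display_ver;  /* 9 == the SKL-era platforms affected */
	uint8_t dpcd[16]; /* first receiver-capability bytes */
};

static bool edp_probe_ok(const struct probe_state *st)
{
	/* DPCD answered but HPD is down: presume the reply came from
	 * some other port sharing the same AUX channel. */
	if (!st->hpd_live)
		return false;

	/* Last resort for boards where even HPD lies: reject if an
	 * analog (DP->VGA) branch device answered, SKL-era only, to
	 * minimize false positives. */
	if (st->display_ver == 9 &&
	    (st->dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_PRESENT) &&
	    (st->dpcd[DP_DOWNSTREAMPORT_PRESENT] & DP_DWN_STRM_PORT_TYPE_MASK) ==
		DP_DWN_STRM_PORT_TYPE_ANALOG)
		return false;

	return true;
}
```

The ordering mirrors the source below: the cheap HPD check runs first, and the platform-gated branch-device check only fires when HPD alone was not enough.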
|
|
|
|
}
|
|
|
|
|
|
2022-10-06 20:48:44 +00:00
|
|
|
|
mutex_lock(&dev_priv->drm.mode_config.mutex);
|
2023-08-29 11:39:15 +00:00
|
|
|
|
drm_edid = drm_edid_read_ddc(connector, connector->ddc);
|
2023-01-25 11:10:49 +00:00
|
|
|
|
if (!drm_edid) {
|
2021-12-29 22:21:59 +00:00
|
|
|
|
/* Fallback to EDID from ACPI OpRegion, if any */
|
2023-01-25 11:10:51 +00:00
|
|
|
|
drm_edid = intel_opregion_get_edid(intel_connector);
|
|
|
|
|
if (drm_edid)
|
2021-12-29 22:21:59 +00:00
|
|
|
|
drm_dbg_kms(&dev_priv->drm,
|
|
|
|
|
"[CONNECTOR:%d:%s] Using OpRegion EDID\n",
|
|
|
|
|
connector->base.id, connector->name);
|
2023-01-25 11:10:49 +00:00
|
|
|
|
}
|
|
|
|
|
if (drm_edid) {
|
|
|
|
|
if (drm_edid_connector_update(connector, drm_edid) ||
|
|
|
|
|
!drm_edid_connector_add_modes(connector)) {
|
|
|
|
|
drm_edid_connector_update(connector, NULL);
|
|
|
|
|
drm_edid_free(drm_edid);
|
|
|
|
|
drm_edid = ERR_PTR(-EINVAL);
|
2013-06-12 20:27:24 +00:00
|
|
|
|
}
|
|
|
|
|
} else {
|
2023-01-25 11:10:49 +00:00
|
|
|
|
drm_edid = ERR_PTR(-ENOENT);
|
2013-06-12 20:27:24 +00:00
|
|
|
|
}
|
|
|
|
|
|
2023-01-25 11:10:49 +00:00
|
|
|
|
intel_bios_init_panel_late(dev_priv, &intel_connector->panel, encoder->devdata,
|
2023-01-25 11:10:50 +00:00
|
|
|
|
IS_ERR(drm_edid) ? NULL : drm_edid);
|
2022-05-10 10:42:39 +00:00
|
|
|
|
|
2022-09-27 18:06:14 +00:00
|
|
|
|
intel_panel_add_edid_fixed_modes(intel_connector, true);
|
2013-06-12 20:27:24 +00:00
|
|
|
|
|
2021-08-31 14:17:34 +00:00
|
|
|
|
/* MSO requires information from the EDID */
|
|
|
|
|
intel_edp_mso_init(intel_dp);
|
|
|
|
|
|
2021-03-02 11:03:01 +00:00
|
|
|
|
/* multiply the mode clock and horizontal timings for MSO */
|
2022-03-31 11:28:13 +00:00
|
|
|
|
list_for_each_entry(fixed_mode, &intel_connector->panel.fixed_modes, head)
|
|
|
|
|
intel_edp_mso_mode_fixup(intel_connector, fixed_mode);
|
2021-03-02 11:03:01 +00:00
|
|
|
|
|
2013-06-12 20:27:24 +00:00
|
|
|
|
/* fallback to VBT if available for eDP */
|
2022-03-31 11:28:13 +00:00
|
|
|
|
if (!intel_panel_preferred_fixed_mode(intel_connector))
|
|
|
|
|
intel_panel_add_vbt_lfp_fixed_mode(intel_connector);
|
|
|
|
|
|
2022-10-06 20:48:44 +00:00
|
|
|
|
mutex_unlock(&dev_priv->drm.mode_config.mutex);
|
2013-06-12 20:27:24 +00:00
|
|
|
|
|
2022-09-12 11:18:12 +00:00
|
|
|
|
if (!intel_panel_preferred_fixed_mode(intel_connector)) {
|
|
|
|
|
drm_info(&dev_priv->drm,
|
|
|
|
|
"[ENCODER:%d:%s] failed to find fixed mode for the panel, disabling eDP\n",
|
|
|
|
|
encoder->base.base.id, encoder->base.name);
|
|
|
|
|
goto out_vdd_off;
|
|
|
|
|
}
|
|
|
|
|
|
2023-01-25 11:10:52 +00:00
|
|
|
|
intel_panel_init(intel_connector, drm_edid);
|
2022-03-31 11:28:13 +00:00
|
|
|
|
|
2022-09-12 11:18:05 +00:00
|
|
|
|
intel_edp_backlight_setup(intel_dp, intel_connector);
|
2013-06-12 20:27:24 +00:00
|
|
|
|
|
2022-03-23 18:29:30 +00:00
|
|
|
|
intel_edp_add_properties(intel_dp);
|
2018-09-09 13:34:56 +00:00
|
|
|
|
|
2022-05-10 10:42:34 +00:00
|
|
|
|
intel_pps_init_late(intel_dp);
|
|
|
|
|
|
2013-06-12 20:27:24 +00:00
|
|
|
|
return true;
|
2016-06-21 08:51:49 +00:00
|
|
|
|
|
|
|
|
|
out_vdd_off:
|
2021-01-08 17:44:14 +00:00
|
|
|
|
intel_pps_vdd_off_sync(intel_dp);
|
2016-06-21 08:51:49 +00:00
|
|
|
|
|
|
|
|
|
return false;
|
2013-06-12 20:27:24 +00:00
|
|
|
|
}
|
|
|
|
|
|
2017-04-06 13:44:19 +00:00
|
|
|
|
static void intel_dp_modeset_retry_work_fn(struct work_struct *work)
|
|
|
|
|
{
|
|
|
|
|
struct intel_connector *intel_connector;
|
|
|
|
|
struct drm_connector *connector;
|
|
|
|
|
|
|
|
|
|
intel_connector = container_of(work, typeof(*intel_connector),
|
|
|
|
|
modeset_retry_work);
|
|
|
|
|
connector = &intel_connector->base;
|
2022-01-21 13:00:33 +00:00
|
|
|
|
drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s]\n", connector->base.id,
|
|
|
|
|
connector->name);
|
2017-04-06 13:44:19 +00:00
|
|
|
|
|
|
|
|
|
/* Grab the locks before changing connector property */
|
|
|
|
|
mutex_lock(&connector->dev->mode_config.mutex);
|
|
|
|
|
/* Set connector link status to BAD and send a Uevent to notify
|
|
|
|
|
* userspace to do a modeset.
|
|
|
|
|
*/
|
2018-07-09 08:40:08 +00:00
|
|
|
|
drm_connector_set_link_status_property(connector,
|
|
|
|
|
DRM_MODE_LINK_STATUS_BAD);
|
2017-04-06 13:44:19 +00:00
|
|
|
|
mutex_unlock(&connector->dev->mode_config.mutex);
|
|
|
|
|
/* Send Hotplug uevent so userspace can reprobe */
|
2021-10-18 08:47:31 +00:00
|
|
|
|
drm_kms_helper_connector_hotplug_event(connector);
|
2017-04-06 13:44:19 +00:00
|
|
|
|
}
|
|
|
|
|
|
2013-06-12 20:27:25 +00:00
|
|
|
|
bool
|
2020-07-01 04:50:54 +00:00
|
|
|
|
intel_dp_init_connector(struct intel_digital_port *dig_port,
|
2012-10-26 21:05:48 +00:00
|
|
|
|
struct intel_connector *intel_connector)
|
2009-04-07 23:16:42 +00:00
|
|
|
|
{
|
2012-10-26 21:05:48 +00:00
|
|
|
|
struct drm_connector *connector = &intel_connector->base;
|
2020-07-01 04:50:54 +00:00
|
|
|
|
struct intel_dp *intel_dp = &dig_port->dp;
|
|
|
|
|
struct intel_encoder *intel_encoder = &dig_port->base;
|
2012-10-26 21:05:48 +00:00
|
|
|
|
struct drm_device *dev = intel_encoder->base.dev;
|
2016-07-04 10:34:36 +00:00
|
|
|
|
struct drm_i915_private *dev_priv = to_i915(dev);
|
2017-11-09 15:24:34 +00:00
|
|
|
|
enum port port = intel_encoder->port;
|
2019-07-09 18:39:33 +00:00
|
|
|
|
enum phy phy = intel_port_to_phy(dev_priv, port);
|
2016-06-24 13:00:14 +00:00
|
|
|
|
int type;
|
2009-04-07 23:16:42 +00:00
|
|
|
|
|
2017-04-06 13:44:19 +00:00
|
|
|
|
/* Initialize the work for modeset in case of link train failure */
|
|
|
|
|
INIT_WORK(&intel_connector->modeset_retry_work,
|
|
|
|
|
intel_dp_modeset_retry_work_fn);
|
|
|
|
|
|
2020-07-01 04:50:54 +00:00
|
|
|
|
if (drm_WARN(dev, dig_port->max_lanes < 1,
|
drm/i915/display/dp: Make WARN* drm specific where drm_device ptr is available
drm specific WARN* calls include device information in the
backtrace, so we know what device the warnings originate from.
Convert all the calls of WARN* with device specific drm_WARN*
variants in functions where drm_device or drm_i915_private struct
pointer is readily available.
The conversion was done automatically with below coccinelle semantic
patch. checkpatch errors/warnings are fixed manually.
@rule1@
identifier func, T;
@@
func(...) {
...
struct drm_device *T = ...;
<...
(
-WARN(
+drm_WARN(T,
...)
|
-WARN_ON(
+drm_WARN_ON(T,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(T,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(T,
...)
)
...>
}
@rule2@
identifier func, T;
@@
func(struct drm_device *T,...) {
<...
(
-WARN(
+drm_WARN(T,
...)
|
-WARN_ON(
+drm_WARN_ON(T,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(T,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(T,
...)
)
...>
}
@rule3@
identifier func, T;
@@
func(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
@rule4@
identifier func, T;
@@
func(struct drm_i915_private *T,...) {
<+...
(
-WARN(
+drm_WARN(&T->drm,
...)
|
-WARN_ON(
+drm_WARN_ON(&T->drm,
...)
|
-WARN_ONCE(
+drm_WARN_ONCE(&T->drm,
...)
|
-WARN_ON_ONCE(
+drm_WARN_ON_ONCE(&T->drm,
...)
)
...+>
}
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200220165507.16823-6-pankaj.laxminarayan.bharadiya@intel.com
2020-02-20 16:55:04 +00:00
|
|
|
|
"Not enough lanes (%d) for DP on [ENCODER:%d:%s]\n",
|
2020-07-01 04:50:54 +00:00
|
|
|
|
dig_port->max_lanes, intel_encoder->base.base.id,
|
2020-02-20 16:55:04 +00:00
|
|
|
|
intel_encoder->base.name))
|
2015-12-08 17:59:38 +00:00
|
|
|
|
return false;
|
|
|
|
|
|
2017-02-08 00:54:11 +00:00
|
|
|
|
intel_dp->reset_link_params = true;
|
2021-01-20 10:18:33 +00:00
|
|
|
|
intel_dp->pps.pps_pipe = INVALID_PIPE;
|
|
|
|
|
intel_dp->pps.active_pipe = INVALID_PIPE;
|
2014-09-04 11:54:20 +00:00
|
|
|
|
|
2012-09-06 20:15:42 +00:00
|
|
|
|
/* Preserve the current hw state. */
|
drm/i915/dp: use intel_de_*() functions for register access
The implicit "dev_priv" local variable use has been a long-standing pain
point in the register access macros I915_READ(), I915_WRITE(),
POSTING_READ(), I915_READ_FW(), and I915_WRITE_FW().
Replace them with the corresponding new display engine register
accessors intel_de_read(), intel_de_write(), intel_de_posting_read(),
intel_de_read_fw(), and intel_de_write_fw().
No functional changes.
Generated using the following semantic patch:
@@
expression REG, OFFSET;
@@
- I915_READ(REG)
+ intel_de_read(dev_priv, REG)
@@
expression REG, OFFSET;
@@
- POSTING_READ(REG)
+ intel_de_posting_read(dev_priv, REG)
@@
expression REG, OFFSET;
@@
- I915_WRITE(REG, OFFSET)
+ intel_de_write(dev_priv, REG, OFFSET)
@@
expression REG;
@@
- I915_READ_FW(REG)
+ intel_de_read_fw(dev_priv, REG)
@@
expression REG, OFFSET;
@@
- I915_WRITE_FW(REG, OFFSET)
+ intel_de_write_fw(dev_priv, REG, OFFSET)
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/abcb2d44fd4d6e5f995a3520b327f746ae90428a.1580149467.git.jani.nikula@intel.com
2020-01-27 18:26:08 +00:00
|
|
|
|
intel_dp->DP = intel_de_read(dev_priv, intel_dp->output_reg);
|
2012-10-19 11:51:50 +00:00
|
|
|
|
intel_dp->attached_connector = intel_connector;
|
2011-02-12 10:33:12 +00:00
|
|
|
|
|
2023-02-08 01:55:08 +00:00
|
|
|
|
if (_intel_dp_is_port_edp(dev_priv, intel_encoder->devdata, port)) {
|
2019-05-09 17:34:46 +00:00
|
|
|
|
/*
|
|
|
|
|
* Currently we don't support eDP on TypeC ports, although in
|
|
|
|
|
* theory it could work on TypeC legacy ports.
|
|
|
|
|
*/
|
2020-02-20 16:55:04 +00:00
|
|
|
|
drm_WARN_ON(dev, intel_phy_is_tc(dev_priv, phy));
|
2010-07-16 18:46:28 +00:00
|
|
|
|
type = DRM_MODE_CONNECTOR_eDP;
|
2021-09-01 16:03:58 +00:00
|
|
|
|
intel_encoder->type = INTEL_OUTPUT_EDP;
|
|
|
|
|
|
|
|
|
|
/* eDP only on port B and/or C on vlv/chv */
|
|
|
|
|
if (drm_WARN_ON(dev, (IS_VALLEYVIEW(dev_priv) ||
|
|
|
|
|
IS_CHERRYVIEW(dev_priv)) &&
|
|
|
|
|
port != PORT_B && port != PORT_C))
|
|
|
|
|
return false;
|
2019-05-09 17:34:46 +00:00
|
|
|
|
} else {
|
2013-11-01 16:22:41 +00:00
|
|
|
|
type = DRM_MODE_CONNECTOR_DisplayPort;
|
2019-05-09 17:34:46 +00:00
|
|
|
|
}
|
2010-07-16 18:46:28 +00:00
|
|
|
|
|
drm/i915/dp: Ensure sink rate values are always valid
Atm, there are no sink rate values set for DP (vs. eDP) sinks until the
DPCD capabilities are successfully read from the sink. During this time
intel_dp->num_common_rates is 0 which can lead to an
intel_dp->common_rates[-1] (*)
access, which is an undefined behaviour, in the following cases:
- In intel_dp_sync_state(), if the encoder is enabled without a sink
connected to the encoder's connector (BIOS enabled a monitor, but the
user unplugged the monitor until the driver loaded).
- In intel_dp_sync_state() if the encoder is enabled with a sink
connected, but for some reason the DPCD read has failed.
- In intel_dp_compute_link_config() if modesetting a connector without
a sink connected on it.
- In intel_dp_compute_link_config() if modesetting a connector with
  a sink connected on it, but before probing the connector first.
To avoid the (*) access in all the above cases, make sure that the sink
rate table - and hence the common rate table - is always valid, by
setting a default minimum sink rate when registering the connector
before anything could use it.
I also considered setting all the DP link rates by default, so that
modesetting with higher resolution modes also succeeds in the last two
cases above. However in case a sink is not connected that would stop
working after the first modeset, due to the LT fallback logic. So this
would need more work, beyond the scope of this fix.
As I mentioned in the previous patch, I don't think the issue this patch
fixes is user visible, however it is an undefined behaviour by
definition and triggers a BUG() in CONFIG_UBSAN builds, hence CC:stable.
v2: Clear the default sink rates, before initializing these for eDP.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/4297
References: https://gitlab.freedesktop.org/drm/intel/-/issues/4298
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211018143417.1452632-1-imre.deak@intel.com
2021-10-18 14:34:17 +00:00
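The hazard the commit above fixes, and the shape of the fix, can be shown with a minimal standalone sketch. Everything here is illustrative (the struct, helper names, and table size are invented); only the 162000 kHz RBR link rate is a real DisplayPort value.

```c
#include <assert.h>

#define MAX_COMMON_RATES 8
#define DP_RATE_RBR_KHZ  162000 /* DP Reduced Bit Rate, the minimum link rate */

/* Illustrative stand-in for the common-rate table kept per sink. */
struct sink {
	int common_rates[MAX_COMMON_RATES];
	int num_common_rates;
};

/* The fix: seed a default minimum rate when the connector is
 * registered, so the table is never empty before DPCD is read. */
static void set_default_sink_rates(struct sink *s)
{
	s->common_rates[0] = DP_RATE_RBR_KHZ;
	s->num_common_rates = 1;
}

/* The hazard: with num_common_rates == 0 this reads
 * common_rates[-1], which is undefined behaviour. It is safe only
 * because set_default_sink_rates() guarantees a non-empty table. */
static int max_common_rate(const struct sink *s)
{
	return s->common_rates[s->num_common_rates - 1];
}
```

Seeding only the minimum rate (rather than all DP rates) matches the commit's reasoning: defaulting to the full rate table would interact badly with the link-training fallback logic when no sink is connected.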
|
|
|
|
intel_dp_set_default_sink_rates(intel_dp);
|
2021-10-18 09:41:52 +00:00
|
|
|
|
intel_dp_set_default_max_sink_lane_count(intel_dp);
|
2021-09-01 16:03:58 +00:00
|
|
|
|
|
2016-12-14 18:00:23 +00:00
|
|
|
|
if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
|
2021-01-20 10:18:33 +00:00
|
|
|
|
intel_dp->pps.active_pipe = vlv_active_pipe(intel_dp);
|
2016-12-14 18:00:23 +00:00
|
|
|
|
|
2023-08-29 11:39:15 +00:00
|
|
|
|
intel_dp_aux_init(intel_dp);
|
2023-10-11 17:16:05 +00:00
|
|
|
|
intel_connector->dp.dsc_decompression_aux = &intel_dp->aux;
|
2023-08-29 11:39:15 +00:00
|
|
|
|
|
drm/i915/dp: conversion to struct drm_device logging macros.
This converts various instances of printk based logging macros in
i915/display/intel_dp.c with the new struct drm_device based logging
macros using the following coccinelle script:
@rule1@
identifier fn, T;
@@
fn(...,struct drm_i915_private *T,...) {
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
@rule2@
identifier fn, T;
@@
fn(...) {
...
struct drm_i915_private *T = ...;
<+...
(
-DRM_INFO(
+drm_info(&T->drm,
...)
|
-DRM_ERROR(
+drm_err(&T->drm,
...)
|
-DRM_WARN(
+drm_warn(&T->drm,
...)
|
-DRM_DEBUG(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_KMS(
+drm_dbg_kms(&T->drm,
...)
|
-DRM_DEBUG_DRIVER(
+drm_dbg(&T->drm,
...)
|
-DRM_DEBUG_ATOMIC(
+drm_dbg_atomic(&T->drm,
...)
)
...+>
}
New checkpatch warnings were fixed manually.
v2: fix merge conflict with new changes in file.
Signed-off-by: Wambui Karuga <wambui.karugax@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200122110844.2022-5-wambui.karugax@gmail.com
2020-01-22 11:08:42 +00:00
|
|
|
|
drm_dbg_kms(&dev_priv->drm,
|
|
|
|
|
"Adding %s connector on [ENCODER:%d:%s]\n",
|
|
|
|
|
type == DRM_MODE_CONNECTOR_eDP ? "eDP" : "DP",
|
|
|
|
|
intel_encoder->base.base.id, intel_encoder->base.name);
|
2013-05-08 10:14:08 +00:00
|
|
|
|
|
2023-08-29 11:39:15 +00:00
|
|
|
|
drm_connector_init_with_ddc(dev, connector, &intel_dp_connector_funcs,
|
|
|
|
|
type, &intel_dp->aux.ddc);
|
2009-04-07 23:16:42 +00:00
|
|
|
|
drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs);
|
|
|
|
|
|
2023-01-05 12:41:25 +00:00
|
|
|
|
if (!HAS_GMCH(dev_priv) && DISPLAY_VER(dev_priv) < 12)
|
2017-11-29 18:08:47 +00:00
|
|
|
|
connector->interlace_allowed = true;
|
2009-04-07 23:16:42 +00:00
|
|
|
|
|
2020-02-05 18:35:43 +00:00
|
|
|
|
intel_connector->polled = DRM_CONNECTOR_POLL_HPD;
|
2017-02-22 06:34:26 +00:00
|
|
|
|
|
2010-09-09 15:20:55 +00:00
|
|
|
|
intel_connector_attach_encoder(intel_connector, intel_encoder);
|
2009-04-07 23:16:42 +00:00
|
|
|
|
|
2016-10-13 10:02:52 +00:00
|
|
|
|
if (HAS_DDI(dev_priv))
|
2012-10-26 21:05:51 +00:00
|
|
|
|
intel_connector->get_hw_state = intel_ddi_connector_get_hw_state;
|
|
|
|
|
else
|
|
|
|
|
intel_connector->get_hw_state = intel_connector_get_hw_state;
|
2024-03-11 14:56:26 +00:00
|
|
|
|
intel_connector->sync_state = intel_dp_connector_sync_state;
|
2012-10-26 21:05:51 +00:00
|
|
|
|
|
2014-10-16 18:27:30 +00:00
|
|
|
|
if (!intel_edp_init_connector(intel_dp, intel_connector)) {
|
2015-11-11 18:34:11 +00:00
|
|
|
|
intel_dp_aux_fini(intel_dp);
|
|
|
|
|
goto fail;
|
2013-06-12 20:27:26 +00:00
|
|
|
|
}
|
2009-07-23 17:00:32 +00:00
|
|
|
|
|
2022-06-03 16:58:41 +00:00
|
|
|
|
intel_dp_set_source_rates(intel_dp);
|
|
|
|
|
intel_dp_set_common_rates(intel_dp);
|
|
|
|
|
intel_dp_reset_max_link_params(intel_dp);
|
|
|
|
|
|
|
|
|
|
/* init MST on ports that can support it */
|
|
|
|
|
intel_dp_mst_encoder_init(dig_port,
|
|
|
|
|
intel_connector->base.base.id);
|
|
|
|
|
|
2010-09-19 08:29:33 +00:00
|
|
|
|
intel_dp_add_properties(intel_dp, connector);
|
2018-01-08 19:55:43 +00:00
|
|
|
|
|
2018-01-18 05:48:05 +00:00
|
|
|
|
if (is_hdcp_supported(dev_priv, port) && !intel_dp_is_edp(intel_dp)) {
|
2021-04-27 11:45:20 +00:00
|
|
|
|
int ret = intel_dp_hdcp_init(dig_port, intel_connector);
|
2018-01-08 19:55:43 +00:00
|
|
|
|
if (ret)
|
2020-01-22 11:08:42 +00:00
|
|
|
|
drm_dbg_kms(&dev_priv->drm,
|
|
|
|
|
"HDCP init failed, skipping.\n");
|
2018-01-08 19:55:43 +00:00
|
|
|
|
}
|
2010-09-19 08:29:33 +00:00
|
|
|
|
|
2020-12-18 10:37:17 +00:00
|
|
|
|
intel_dp->frl.is_trained = false;
|
|
|
|
|
intel_dp->frl.trained_rate_gbps = 0;
|
|
|
|
|
|
2021-02-04 13:40:14 +00:00
|
|
|
|
intel_psr_init(intel_dp);
|
|
|
|
|
|
2013-06-12 20:27:25 +00:00
|
|
|
|
return true;
|
2015-11-11 18:34:11 +00:00
|
|
|
|
|
|
|
|
|
fail:
|
2022-12-22 20:18:04 +00:00
|
|
|
|
intel_display_power_flush_work(dev_priv);
|
2015-11-11 18:34:11 +00:00
|
|
|
|
drm_connector_cleanup(connector);
|
|
|
|
|
|
|
|
|
|
return false;
|
2009-04-07 23:16:42 +00:00
|
|
|
|
}
|
2012-10-26 21:05:48 +00:00
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
void intel_dp_mst_suspend(struct drm_i915_private *dev_priv)
|
2014-05-02 04:02:48 +00:00
|
|
|
|
{
|
2018-07-05 16:43:52 +00:00
|
|
|
|
struct intel_encoder *encoder;
|
|
|
|
|
|
drm/i915: skip display initialization when there is no display
Display features should not be initialized or de-initialized when there
is no display. Skip modeset initialization, output setup, plane, crtc,
encoder, connector registration, display cdclk and rawclk
initialization, display core initialization, etc.
Skip the functionality at as high level as possible, and remove any
redundant checks. If the functionality is conditional to *other* display
checks, do not add more. If the un-initialization has checks for
initialization, do not add more.
We explicitly do not care about any GMCH/VLV/CHV code paths, as they've
always had and will have display.
Reviewed-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210408203150.237947-3-jose.souza@intel.com
2021-04-08 20:31:50 +00:00
|
|
|
|
if (!HAS_DISPLAY(dev_priv))
|
|
|
|
|
return;
|
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
for_each_intel_encoder(&dev_priv->drm, encoder) {
|
|
|
|
|
struct intel_dp *intel_dp;
|
2014-05-02 04:02:48 +00:00
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
if (encoder->type != INTEL_OUTPUT_DDI)
|
|
|
|
|
continue;
|
2016-06-22 18:57:00 +00:00
|
|
|
|
|
2019-12-04 18:05:43 +00:00
|
|
|
|
intel_dp = enc_to_intel_dp(encoder);
|
2016-06-22 18:57:00 +00:00
|
|
|
|
|
2021-10-06 10:16:18 +00:00
|
|
|
|
if (!intel_dp_mst_source_support(intel_dp))
|
2014-05-02 04:02:48 +00:00
|
|
|
|
continue;
|
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
if (intel_dp->is_mst)
|
|
|
|
|
drm_dp_mst_topology_mgr_suspend(&intel_dp->mst_mgr);
|
2014-05-02 04:02:48 +00:00
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
void intel_dp_mst_resume(struct drm_i915_private *dev_priv)
|
2014-05-02 04:02:48 +00:00
|
|
|
|
{
|
2018-07-05 16:43:52 +00:00
|
|
|
|
struct intel_encoder *encoder;
|
2014-05-02 04:02:48 +00:00
|
|
|
|
|
2021-04-08 20:31:50 +00:00
|
|
|
|
if (!HAS_DISPLAY(dev_priv))
|
|
|
|
|
return;
|
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
for_each_intel_encoder(&dev_priv->drm, encoder) {
|
|
|
|
|
struct intel_dp *intel_dp;
|
2016-06-22 18:57:00 +00:00
|
|
|
|
int ret;
|
2014-05-02 04:02:48 +00:00
|
|
|
|
|
2018-07-05 16:43:52 +00:00
|
|
|
|
if (encoder->type != INTEL_OUTPUT_DDI)
|
|
|
|
|
continue;
|
|
|
|
|
|
2019-12-04 18:05:43 +00:00
|
|
|
|
intel_dp = enc_to_intel_dp(encoder);
|
2018-07-05 16:43:52 +00:00
|
|
|
|
|
2021-10-06 10:16:18 +00:00
|
|
|
|
if (!intel_dp_mst_source_support(intel_dp))
|
2016-06-22 18:57:00 +00:00
|
|
|
|
continue;
|
2014-05-02 04:02:48 +00:00
|
|
|
|
|
drm/dp_mst: Add basic topology reprobing when resuming
Finally! For a very long time, our MST helpers have had one very
annoying issue: They don't know how to reprobe the topology state when
coming out of suspend. This means that if a user has a machine connected
to an MST topology and decides to suspend their machine, we lose all
topology changes that happened during that period. That can be a big
problem if the machine was connected to a different topology on the same
port before resuming, as we won't bother reprobing any of the ports and
likely cause the user's monitors not to come back up as expected.
So, we start fixing this by teaching our MST helpers how to reprobe the
link addresses of each connected topology when resuming. As it turns
out, the behavior that we want here is identical to the behavior we want
when initially probing a newly connected MST topology, with a couple of
important differences:
- We need to be more careful about handling the potential races between
events from the MST hub that could change the topology state as we're
performing the link address reprobe
- We need to be more careful about handling unlikely state changes on
ports - such as an input port turning into an output port, something
that would be far more likely to happen in situations like the MST hub
we're connected to being changed while we're suspend
Both of which have been solved by previous commits. That leaves one
requirement:
- We need to prune any MST ports in our in-memory topology state that
were present when suspending, but have not appeared in the post-resume
link address response from their parent branch device
Which we can now handle in this commit by modifying
drm_dp_send_link_address(). We then introduce suspend/resume reprobing
by introducing drm_dp_mst_topology_mgr_invalidate_mstb(), which we call
in drm_dp_mst_topology_mgr_suspend() to traverse the in-memory topology
state to indicate that each mstb needs it's link address resent and PBN
resources reprobed.
On resume, we start back up &mgr->work and have it reprobe the topology
in the same way we would on a hotplug, removing any leftover ports that
no longer appear in the topology state.
Changes since v4:
* Split indenting changes in drm_dp_mst_topology_mgr_resume() into a
separate patch
* Only fire hotplugs when something has actually changed after a link
address probe
* Don't try to change port->connector at all on ports, just throw out
ports that need their connectors removed to make things easier.
Cc: Juston Li <juston.li@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Harry Wentland <hwentlan@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Sean Paul <sean@poorly.run>
Signed-off-by: Lyude Paul <lyude@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191022023641.8026-14-lyude@redhat.com
2019-06-17 23:57:33 +00:00
|
|
|
|
ret = drm_dp_mst_topology_mgr_resume(&intel_dp->mst_mgr,
|
|
|
|
|
true);
|
2019-01-29 19:10:00 +00:00
|
|
|
|
if (ret) {
|
|
|
|
|
intel_dp->is_mst = false;
|
|
|
|
|
drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr,
|
|
|
|
|
false);
|
|
|
|
|
}
|
2014-05-02 04:02:48 +00:00
|
|
|
|
}
|
|
|
|
|
}
|