
Issue 767893


Issue metadata

Status: Assigned
Owner: ovanieva@chromium.org
Components: UI>Shell>MultipleMonitor
OS: Chrome
Pri: 2
Type: Bug




we should plumb link failure notifications

Reported by marc...@chromium.org (Project Member), Sep 22 2017

Issue description

In upstream kernels, we have support for DP link failure notifications. We should plumb this through in Chrome.
 
Components: UI>Shell>MultipleMonitor
Labels: OS-Chrome
Can you please provide more details?
Basically, while DisplayPort is running, there can be a loss of link, which we can (sometimes) detect. The kernel will pick it up and set DRM_MODE_LINK_STATUS_BAD on the "link-status" property of the connector. Then the kernel will send a hotplug event, which is what Chrome sees.

The missing bit is that Chrome needs to check for DRM_MODE_LINK_STATUS_BAD on the link-status property and, if it's set, redo a full modeset. Right now it doesn't, because it sees that no displays were added or removed.

(Sean, correct me if I missed something)
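A minimal sketch of that check, to make it concrete (this is not Chrome's actual code; it assumes libdrm, an open DRM fd, and a connector_id from the hotplug-triggered connector scan):

    /* Sketch: after a hotplug uevent, test whether the kernel has
     * flagged this connector's link as bad. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static bool link_status_bad(int fd, uint32_t connector_id)
    {
        bool bad = false;
        drmModeObjectPropertiesPtr props = drmModeObjectGetProperties(
            fd, connector_id, DRM_MODE_OBJECT_CONNECTOR);
        if (!props)
            return false;

        for (uint32_t i = 0; i < props->count_props; i++) {
            drmModePropertyPtr prop =
                drmModeGetProperty(fd, props->props[i]);
            if (!prop)
                continue;
            if (strcmp(prop->name, "link-status") == 0 &&
                props->prop_values[i] == DRM_MODE_LINK_STATUS_BAD)
                bad = true;
            drmModeFreeProperty(prop);
        }
        drmModeFreeObjectProperties(props);
        return bad;
    }

If this returns true even though the connector list is unchanged, Chrome would trigger the full modeset instead of ignoring the event.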
Labels: M-65
Owner: osh...@chromium.org
Status: Assigned (was: Untriaged)
afakhry@: Marcheu and I talked about this, and I agree it's worth doing; essentially, users will be notified if the cable is faulty. Assigning to Oshima for now and targeting M65 tentatively.
@c2: yep, exactly.

It's also worth noting that the kernel will prune the mode which produced the bad link. So on failure, Chrome should first get a new list of modes and then do a full modeset with whichever leftover mode is preferred.
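Continuing the sketch above (again an assumption, not the shipped code): after a bad-link event, re-query the connector with drmModeGetConnector() so the possibly pruned mode list is picked up, then choose the mode for the recovery modeset.

    /* Sketch: pick the mode to use for the recovery modeset from a
     * freshly fetched connector. */
    static drmModeModeInfo *pick_recovery_mode(drmModeConnectorPtr conn)
    {
        for (int i = 0; i < conn->count_modes; i++) {
            if (conn->modes[i].type & DRM_MODE_TYPE_PREFERRED)
                return &conn->modes[i];
        }
        /* The preferred mode itself may have been pruned; fall back to
         * the first remaining mode, if any. */
        return conn->count_modes ? &conn->modes[0] : NULL;
    }

The chosen mode would then go into a drmModeSetCrtc() call for the full modeset.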
@4: Hmm, I didn't realize that the kernel prunes the mode. That doesn't necessarily sound like a good idea; for DP the mode doesn't have much to do with the quality of the transport. What's the underlying reason?
@5: Yeah, I wasn't sold on this either (and, well, I wanted to handle the whole thing in the kernel, but alas). 

I asked Manasi Navare on IRC; here's what she said:

2:45 PM <seanpaul> mdnavare: around? marcheu was wondering why we need to prune modes when link fails. IIRC, that decision is driven by DP compliance?

2:45 PM <mdnavare> seanpaul: Hi
2:46 PM <mdnavare> seanpaul: You mean pruning of modes that happens in i915 in mode_valid?

2:47 PM <seanpaul> mdnavare: i might just be misremembering (and haven't checked code). i thought that we pruned the current mode when the bad link status property is flipped

2:47 PM <mdnavare> seanpaul: We have to prune the mode if the fallback link rate/lane count cannot handle the requested link BW for a particular mode

2:47 PM <seanpaul> ok, so the mode is preserved in get_modes if the link fails, but we can fall back to another link rate/lane count that will support it? 

2:49 PM <mdnavare> seanpaul: Yes since after bad link status, we lower the link rate/lane count, now the available BW gets lower and we gotta check if that's enough for the current mode; if not then we change the current mode_status to CLOCK_HIGH and then it gets pruned before it returns the modelist to the userspace

2:49 PM <seanpaul> mdnavare: cool! thanks

2:49 PM <mdnavare> seanpaul: Look at drm/i915/intel_dp.c: intel_dp_mode_valid()

Sooo... it sounds like we don't drop the modes on link failure? That would be great!
Yeah, I should have really added a tl;dr to that. Apologies.

The modes are only pruned when the mode can't be produced at the current rate/lane count. So we should still regenerate the mode list on a bad link in case the mode has been pruned, but the kernel won't prune a mode if there's still a chance it can work.
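To make that pruning rule concrete, here's a rough, simplified version of the check (the real logic lives in drm/i915/intel_dp.c:intel_dp_mode_valid(); the unit choices and the flat 8b/10b overhead are assumptions of this sketch):

    /* Sketch: a mode survives only if the (possibly lowered) link can
     * still carry it. */
    enum sketch_mode_status { SKETCH_MODE_OK, SKETCH_MODE_CLOCK_HIGH };

    static enum sketch_mode_status dp_mode_valid_sketch(
        long long dotclock_khz, int bpp,          /* the mode */
        long long link_rate_kbps, int lane_count) /* link after fallback */
    {
        /* Payload bandwidth of the link, after 8b/10b channel coding. */
        long long available_kbps = link_rate_kbps * lane_count * 8 / 10;
        /* Bandwidth the mode needs: pixel clock times bits per pixel. */
        long long required_kbps = dotclock_khz * bpp;

        return required_kbps > available_kbps ? SKETCH_MODE_CLOCK_HIGH
                                              : SKETCH_MODE_OK;
    }

For example, a 4K60 mode (~533,000 kHz dot clock at 24 bpp, so ~12.8 Gbps needed) survives four lanes of HBR2 (4 x 5.4 Gbps raw, ~17.3 Gbps payload) but gets pruned once the link falls back to HBR (4 x 2.7 Gbps raw, ~8.6 Gbps payload).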
Labels: -Pri-3 Pri-2
Owner: warx@chromium.org
Labels: -M-65 M-69

Comment 12 by warx@chromium.org, Jun 20 2018

Owner: ovanieva@chromium.org
Labels: -M-69 M-72
