
Issue 784390

Starred by 1 user

Issue metadata

Status: Started
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 3
Type: Task




Figure out whether there's a way to catch high-impact security bugs in automation

Project Member Reported by mnissler@chromium.org, Nov 13 2017

Issue description

After a recent accidental leak, the idea came up to see whether we can identify high-impact bugs in sheriffbot automation and require manual approval before publishing them. This bug is intended to record thoughts and discussion around this.
 
Cc: jpm@google.com
If we want to do this, we need a reasonable way to identify high-impact bugs that satisfies these properties:
1. Based on existing information (i.e. bug labels); adding more labels won't fly, given how hard it is to get people to use them and the complexity it adds to the process.
2. Low false-positive rate; otherwise we'd lose the benefits of automation.
3. Reasonably low false-negative rate; otherwise the whole exercise is pointless.

Existing signals I'm aware of that we may consider:

1. Priority. We could require manual review for all P0 bugs. This wouldn't have kicked in on issue 722261 though, which was high-impact but not critically urgent, since the response work took place across several months.

2. Security labels. The only label that makes sense to trigger this is Restrict-View-SecurityEmbargo. Sheriffbot already skips issues marked with that label, so nothing to improve here.

3. Restrict-View-Google. Probably used too frequently and would cause a significant number of false positives.

4. Some heuristics on CC list: Bugs with lots of @google.com CCs are more likely to be sensitive. Maybe we could consider requiring manual review for bugs with more than 5 @google.com CCs?

5. Figuring out from comment history whether the bug affects external entities (partners, upstream projects, etc.)? Hard to do reliably without significant investment. Also, the highest-impact security bugs tend to omit partner information as a measure of caution.
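The CC-list heuristic in signal #4 is simple enough to sketch. This is a hypothetical illustration, not an existing sheriffbot check; the threshold of 5 comes from the suggestion above, and the function name is made up.

```python
# Hypothetical sketch of signal #4: hold a bug for manual approval when
# more than a threshold of its CCs are @google.com accounts.

GOOGLE_CC_THRESHOLD = 5  # "more than 5 @google.com CCs" per the suggestion above

def needs_manual_review(cc_emails):
    """Return True if the bug should be held for manual approval
    based on its CC list alone."""
    google_ccs = [e for e in cc_emails if e.endswith("@google.com")]
    return len(google_ccs) > GOOGLE_CC_THRESHOLD

# Example: only 2 @google.com CCs, so this bug would publish normally.
ccs = ["a@google.com", "b@google.com", "c@chromium.org"]
needs_manual_review(ccs)  # False
```

The threshold would presumably need tuning against historical data to keep the false-positive rate acceptable (property #2 above).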


palmer@, jorgelo@: any other ideas? IMHO, of the above only #4 makes some sense (even though I'm not really convinced this'll work in practice).

Comment 2 by cthomp@chromium.org, Nov 13 2017

Cc: cthomp@chromium.org
Quick question given signal #2: are we interested in catching high-impact bugs that may be mis-categorized (not already labeled as Security-related), or are we more interested in bugs that are already Security-labeled, with some automated process for detecting when those bugs might be high-enough-impact to mandate manual approval (i.e., missing Security_Severity labels)?

(Or perhaps some combination of both?)

Comment 3 by awhalley@google.com, Nov 17 2017

Cc: awhalley@chromium.org

Comment 4 by cthomp@chromium.org, Nov 17 2017

To throw a couple of ideas at the wall:

Do we have a way to find all bugs that were Restrict-View-Security, then made public, and then re-restricted? I'm not sure this is feasible with the search functionality though. If we had a set of true positives to look at, useful heuristics might become clearer, and we could tweak how "noisy" the manual approvals are. (I'm imagining something vaguely inspired by the anomaly scoring system of Ho et al https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/ho).
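The restrict-then-public-then-restricted pattern could be mined from label-change history if an export of it existed. A minimal sketch, assuming a chronological list of (action, label) events per issue; the event format is invented here, since the tracker's search can't express this directly.

```python
# Hypothetical sketch: detect issues whose Restrict-View-Security* label was
# removed (issue made public) and later added back (re-restricted), i.e. the
# leak-then-fix signature described above. A real implementation would read
# label changes from an audit log or archive, which is assumed to exist.

def was_rerestricted(label_events):
    """label_events: chronological list of (action, label) tuples, where
    action is "add" or "remove". Returns True if any Restrict-View-Security*
    label was removed and some such label was added again afterwards."""
    removed = False
    for action, label in label_events:
        if not label.startswith("Restrict-View-Security"):
            continue  # ignore unrelated labels, e.g. Restrict-View-Google
        if action == "remove":
            removed = True
        elif action == "add" and removed:
            return True
    return False

history = [("add", "Restrict-View-Security"),
           ("remove", "Restrict-View-Security"),      # accidentally published
           ("add", "Restrict-View-SecurityEmbargo")]  # re-restricted
was_rerestricted(history)  # True
```

Running this over a historical dump would yield the true-positive set to calibrate any scoring heuristic against.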

(On a related note: An archive of the issue tracker in something like BigQuery could be very useful for this kind of stuff, but we might have to be careful to not have it be another venue for leaking non-public issues.)
