Figure out whether there's a way to catch high-impact security bugs in automation
Issue description: After a recent accidental leak, the idea came up to see whether sheriffbot automation can identify high-impact bugs and require manual approval before publishing them. This bug is intended to record thoughts and discussion around that.
Comment 1 by mnissler@chromium.org, Nov 13 2017
Quick question given signal #2: Are we interested in catching high-impact bugs that may be mis-categorized (i.e., not yet labeled as security-related), or are we more interested in bugs that are already labeled "Security" but are missing Security_Severity labels, with some automated process for detecting when those bugs might be high-impact enough to mandate manual approval? (Or perhaps some combination of both?)
Nov 17 2017
To throw a couple of ideas at the wall: Do we have a way to find all bugs that were Restrict-View-Security, then made public, and then re-restricted? I'm not sure whether that's feasible with the search functionality, though. If we had a set of true positives to look at, the common heuristics might become clearer and let us tune how "noisy" the manual approvals are. (I'm imagining something vaguely inspired by the anomaly-scoring system of Ho et al., https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/ho.) (On a related note: an archive of the issue tracker in something like BigQuery could be very useful for this kind of work, but we'd have to be careful that it doesn't become another venue for leaking non-public issues.)
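
To make the restricted-then-public-then-re-restricted idea a bit more concrete, here's a minimal sketch of how we could mine such a true-positive set, assuming we had an export of label-change events (the event format, field names, and find_reopened_leaks helper below are hypothetical, not an actual tracker schema or API):

    # Hypothetical sketch: flag issues whose Restrict-View-Security label was
    # removed (made public) and later re-added (re-restricted), given an
    # export of label-change events. The (issue_id, timestamp, action, label)
    # event format is assumed, not an actual Monorail/BigQuery schema.
    from collections import defaultdict

    RVS = "Restrict-View-Security"

    def find_reopened_leaks(events):
        """events: iterable of (issue_id, timestamp, action, label) tuples,
        where action is 'added' or 'removed'."""
        history = defaultdict(list)
        for issue_id, ts, action, label in events:
            if label == RVS:
                history[issue_id].append((ts, action))

        suspicious = []
        for issue_id, changes in history.items():
            changes.sort()  # chronological order by timestamp
            actions = [a for _, a in changes]
            # A 'removed' followed later by an 'added' suggests the bug was
            # published and then pulled back, i.e. a likely accidental leak.
            for i, a in enumerate(actions):
                if a == "removed" and "added" in actions[i + 1:]:
                    suspicious.append(issue_id)
                    break
        return suspicious

    # Example with made-up data:
    events = [
        (123, "2017-10-01", "added", RVS),
        (123, "2017-10-20", "removed", RVS),  # made public
        (123, "2017-10-21", "added", RVS),    # re-restricted
        (456, "2017-09-15", "added", RVS),
        (456, "2017-11-01", "removed", RVS),  # normal disclosure after the fix shipped
    ]
    print(find_reopened_leaks(events))  # -> [123]

The issues this flags would be the true-positive set to mine for shared heuristics and to calibrate how noisy the manual-approval gate ends up being.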