Telemetry doesn't distinguish between Android Chrome & WebView in fieldtrial_testing_config.json
Issue description: This has led to issue 879151.
Sep 6
Yeah, that should have been fixed by https://crrev.com/c/1114382. I tested it just now with some logs in components/variations/field_trial_config/field_trial_util.cc, and it seems to work: ChooseExperiment only calls AssociateParamsFromExperiment for the 4 studies in fieldtrial_testing_config.json which specify "android_webview", which doesn't include SurfaceSynchronization. Do you have some circumstance where this is not the case?
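For illustration, here is a rough sketch of that filtering. The JSON structure is approximated (field names may not match the real fieldtrial_testing_config.json exactly) and the Python helper is hypothetical, not the actual logic in field_trial_util.cc:

    import json

    # Approximate shape of a fieldtrial_testing_config.json entry (illustrative only).
    config = json.loads("""
    {
      "SurfaceSynchronization": [
        {
          "platforms": ["android", "linux", "mac", "windows"],
          "experiments": [
            {"name": "Enabled", "enable_features": ["SurfaceSynchronization"]}
          ]
        }
      ]
    }
    """)

    def studies_for_platform(config, platform):
        # Hypothetical helper: keep only the study entries that list this platform.
        return {
            study: [e for e in entries if platform in e.get("platforms", [])]
            for study, entries in config.items()
        }

    # A study that doesn't list "android_webview" yields no entries for WebView.
    print(studies_for_platform(config, "android_webview"))
    print(studies_for_platform(config, "android"))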
Sep 6
Hmm, it's not the case for the Telemetry test. Fady: weren't you the one who added SurfaceSynchronization to fieldtrial_testing_config.json? Can you answer paulmiller@'s question?
Nov 15
I'm not sure I'm the right person to be assigned this bug. Passing to asvitkine@ to find the right person.
Nov 15
Paul: can you take a look, given that you've added support to implement part of this?
Jan 8
It looks like fieldtrial_testing_config.json is not actually used, because the test sets all its own features with --force-fieldtrials. WebView is correctly omitted from SurfaceSynchronization in both fieldtrial_testing_config.json and the server-side GCL config, but that didn't matter, because the test was explicitly forcing it on. And the test was already fixed on the other bug, so there's nothing to do here.
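For illustration, a minimal sketch of why the forced trial wins. The trial and group names below are assumed for the example, not taken from the actual test:

    # Assumed trial/group names, for illustration only.
    forced_trials = [("SurfaceSynchronization", "Enabled")]

    # --force-fieldtrials takes Trial/Group pairs joined by '/'; a trial forced this
    # way applies regardless of what fieldtrial_testing_config.json would have chosen.
    browser_args = [
        "--force-fieldtrials=" + "/".join("%s/%s" % (t, g) for t, g in forced_trials)
    ]
    print(browser_args)  # ['--force-fieldtrials=SurfaceSynchronization/Enabled']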
Jan 10
As far as I understand, fieldtrial_testing_config.json *is* used to set the value of --force-fieldtrials that is passed to Telemetry. If I'm reading your comment correctly, it should now be possible to remove the hacks we introduced to silence the error here: https://cs.chromium.org/search/?q=crbug.com/881469. If that sounds right, please assign this back to me to finish that cleanup work.
Jan 10
You're right. Still, fieldtrial_testing_config.json does actually distinguish WebView. But fieldtrial_util.GenerateArgs is getting called with platforms=['android'], when the script was invoked with --browser=android-webview-google. What's the reason for parsing fieldtrial_testing_config.json in the perf script? Chrome will apply the test config itself if you don't use --force-fieldtrials (and fieldtrial_testing_enabled is true in components/variations/service/BUILD.gn).
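A minimal sketch of the kind of platform fix being discussed, assuming the Telemetry browser type string is available at that point; the helper name is hypothetical, and only the GenerateArgs call shape comes from this thread:

    def _ConfigPlatformForBrowser(browser_type):
        # Hypothetical mapping: WebView browsers should be filtered as
        # 'android_webview' rather than plain 'android'.
        if browser_type.startswith("android-webview"):
            return "android_webview"
        if browser_type.startswith("android"):
            return "android"
        return browser_type

    # The perf script would then call something like:
    #   fieldtrial_util.GenerateArgs(config_path, [_ConfigPlatformForBrowser(browser_type)])
    print(_ConfigPlatformForBrowser("android-webview-google"))  # android_webview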
Jan 10
I think maybe the separate Python logic to produce the --force-fieldtrials command line is needed to make perf bots that run Google Chrome builds respect the testing config? I agree that for Chromium builds, it should already be covered via the generated C++ config. However, if that's the only reason, then one option would be to change the Google Chrome setup to also include the C++ config, but not use it unless a command-line flag is passed. Then, perf bots could just pass that flag.
Jan 10
Yeah I was thinking the same :) but I wonder if there's some special reason for parsing the config manually that we don't want to break.
Jan 10
If I recall the history correctly, the Python + command-line logic was written first, when we only made it affect perf bots; the C++ code was added later, when we wanted browser-test support. So I think it's a historical artifact of the order in which the two were added.
Jan 11
Got it. I'll repurpose this bug to fix the Telemetry issue described in #9. Possibly the simple fix is to use the correct platform value for the WebView browser. I also don't know much of the history of why Telemetry does things this way; we may want to revisit that. It would certainly be nice not to have all these extra command-line options (which on Android make the command-line file huge). +crouleau FYI
Jan 11
Agreed that we don't want that many command-line options, but Telemetry can't do anything about that, because that giant list is what the Finch team provides for us (via a script that we run to select those IIRC). Without that giant list, Chrome has nondeterministic behavior. We could have someone do an investigation and figure out whether we should request some other solution from the Finch team, but that would be a good bit of work.
Jan 11
> that giant list is what the Finch team provides for us (via a script that we run to select those IIRC).

Are you referring to fieldtrial_testing_config.json, which is parsed in perf_benchmark.py in _GetVariationsBrowserArgs? That's the mechanism we're talking about removing, because Chrome can apply those settings itself.

> Without that giant list, Chrome has nondeterministic behavior.

How so? Do you mean, because Chrome will use the real, downloaded Finch configs? We could add an option to avoid that.
Jan 11
Yeah, that's what I'm talking about. Sounds awesome if we can replace it! How does Chrome apply those settings itself?

Yes, we need the tests to be fully hermetic (single machine, or in the Android case a single Android device and host) and to depend only on the input isolate binaries, not on any external server.

#10:
> However, if that's the only reason, then one option would be to change the Google Chrome setup to also include the C++ config, but not use it unless a command-line flag is passed. Then, perf bots could just pass that flag.

Yes! This is what we want! Let's not do #9, since I'm pretty sure #10's comment ("I think maybe the separate Python logic to produce the --force-fieldtrials command line is needed to make perf bots that run Google Chrome builds respect the testing config?") is correct.

We need to do this work carefully, because if we break things we won't find out right away, and by the time we figure it out we will have ruined months' worth of chromeperf dashboard data and made Pinpoint jobs worthless. Please add me to review any work. It would also be great for someone to write a short one-page design doc and get sign-off if possible.