Ensure cached search provider results are expired once fetchers aren't running
Issue description

SearchProvider caches server-provided results between runs. It doesn't make any checks as to whether the results still begin with the new input string (though I thought it did once upon a time?). This seems unnecessary given that AutocompleteResult also caches matches across runs to reduce flicker.

It can also lead to bugs; for example, I typed "fac" to trigger several facebook-related suggestions, then after a pause, rapidly and steadily typed garbage keys until I filled my whole address bar. The facebook suggestions persisted even after I stopped typing; presumably my query caused the server to return a result that ParseSuggestResults() failed to parse, so no new results were ever considered to be available, so the old ones were never expired.

We could try to fix the bugs, but why bother? I'd just aim to remove this caching. Note that it has been parametrized under a field trial (added in bug 420903) for years; presumably we have some data to show whether it helps at all. If so, it'd be good to know why.
May 23 2017
Your memory is right in that years ago we used to only cache results that were matches against the user's (new) input string. This caused visual jank (see bug 173915), which we fixed by being less aggressive at removing stale results. After more discussion (see bug 309465), we then went further, making things even less aggressive and also cleaner. (The legal default match code made the logic simpler, if I recall correctly.)

We then added even more state to this logic to be smarter about preserving inline autocompletions and not changing things asynchronously. See bug 398135. We need to keep the cached state for the fixes for bug 398135, and we don't keep any additional state beyond what's needed for that bug fix. Thus, I think there's nothing to remove.

If you care enough, you may want to read through the bug threads, figure out why the AutocompleteResult-level caching didn't solve the issues in bug 173915 (maybe the timeouts were wrong?), and whether you can find an alternate solution to bug 398135 that doesn't involve caching, involves caching less, or somehow has simpler code. For now, I'm going to trust that we made good, thoughtful decisions about this in the past. :-)
May 23 2017
Is it possible someone's going to look at the code in the future and ask again about this? Can we add a comment referencing this discussion for future archaeologists? Comment 2 has a ton of information that seems like it may be useful again someday.
May 23 2017
Thanks for the praise of my comment. I think at this point the primary motivation for caching is related to asynchronous updates (re: bug 398135), which is commented in the code:
https://cs.chromium.org/chromium/src/components/omnibox/browser/search_provider.cc?l=818-824
https://cs.chromium.org/chromium/src/components/omnibox/browser/search_suggestion_parser.h?l=110-113
and which Peter discovered (see comment #1). If you can find a good place to add additional comments or expand those, I'm happy to review them.

I'm not sure commenting about the earlier history is necessary. What would it say?

// If SearchProvider didn't cache in order to prevent asynchronous updates, then it still may want to cache for reasons ...

That seems like a counterfactual that's not going to happen, so I'm not sure having it there would help in understanding the code. Nonetheless, again, I'm happy to review comment changes if you send them to me.
May 24 2017
The way the caching worked and its purpose were really unclear in the code -- it took me a couple hours of looking at things to fully understand it, which is also why I didn't include comment 1 here in the root post (because I didn't know it when I filed the bug). Having the history as the motivation for why we don't clear these more aggressively would also be useful. So definitely some clarity would help. If necessary, I could probably try to wordsmith something, but I'm unlikely to do so unless someone prods me. (Yay laziness?)

With more thought, I don't think a code change that involves caching less (initially) would work as well, but I do think more aggressive clearing might be feasible. However, the only thing I think we really _should_ do, which I'm reopening this bug for (since it's what made me file it to begin with), is guarantee that we eventually expire these matches when we know they won't be replaced by up-to-date ones. Today, it's possible to bypass this, which leaves you with wrong suggestions that don't go away (until you type more). This is clearly P3.
May 24 2017
I still agree better comments would be useful. I'm not sure about the current morph of this bug. I think in places with poor network connectivity, it would be good to keep the cached suggestions if the request fails. (By the way, I cannot reproduce your original complaint, though as you say it may be because of a flaky server issue.) We only clear the search suggestion cache on new omnibox interactions (new focuses).
May 24 2017
The idea wouldn't be to unconditionally purge results, but to purge things that don't match the current input. Sort of like we used to do, but much less aggressively (only when we're not going to replace them with anything else). Here's a way I reproduced this stuff: type "fac"; hold down the period key for five seconds; then type more characters. The suggestions remain constant, but probably shouldn't.
May 24 2017
It's hard to detect "doesn't match the current input", as many times the suggestions are spelling corrections of what you typed. I agree that it would be useful to have something like what you say; I'm just not sure how it can be done in a way that helps real users with real inputs without eliminating suggestions that are still appropriate. A prefix match does not suffice. Edit distance might get you somewhere, though that feels excessive (run edit distance on all suggestions for every input, drop if >50%?).
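Purely to make that heuristic concrete, here is a rough sketch of what it might look like. This is not SearchProvider code: LevenshteinDistance and DropDistantSuggestions are made-up names, it treats suggestions as plain strings rather than the provider's real result objects, and the 50% threshold is just the number floated above.

#include <algorithm>
#include <string>
#include <vector>

// Classic dynamic-programming Levenshtein distance between two strings.
size_t LevenshteinDistance(const std::string& a, const std::string& b) {
  std::vector<size_t> prev(b.size() + 1), cur(b.size() + 1);
  for (size_t j = 0; j <= b.size(); ++j)
    prev[j] = j;
  for (size_t i = 1; i <= a.size(); ++i) {
    cur[0] = i;
    for (size_t j = 1; j <= b.size(); ++j) {
      size_t substitution =
          (a[i - 1] == b[j - 1]) ? prev[j - 1] : prev[j - 1] + 1;
      cur[j] = std::min({prev[j] + 1,     // deletion
                         cur[j - 1] + 1,  // insertion
                         substitution});  // substitution (or match)
    }
    std::swap(prev, cur);
  }
  return prev[b.size()];
}

// Drop cached suggestions whose edit distance from the new input exceeds half
// the length of the longer string (the ">50%" idea above).
void DropDistantSuggestions(const std::string& input,
                            std::vector<std::string>* suggestions) {
  suggestions->erase(
      std::remove_if(suggestions->begin(), suggestions->end(),
                     [&](const std::string& suggestion) {
                       size_t limit =
                           std::max(input.size(), suggestion.size()) / 2;
                       return LevenshteinDistance(input, suggestion) > limit;
                     }),
      suggestions->end());
}

Even in this stripped-down form the distance computation is quadratic in string length for every suggestion, which is part of why it feels heavyweight for something that would run on every keystroke.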
May 24 2017
Since I'm only proposing to drop suggestions in the case where the ones we have are from a previous run and the server is never going to send us anything for the current run, I think simple things like prefix matching are OK.
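To make that proposal concrete, here is a minimal hypothetical sketch of the narrow expiry rule being suggested. StaleResultExpirer, cached_matches_, results_are_current_, and fetch_pending_ are invented names standing in for the provider's real state, and the real SearchProvider keeps richer suggest-result objects rather than plain strings.

#include <algorithm>
#include <string>
#include <vector>

class StaleResultExpirer {
 public:
  // Called once we know no fresh results will arrive for |current_input|,
  // e.g. because the fetch failed or the response could not be parsed.
  void ExpireStaleMatches(const std::string& current_input) {
    if (results_are_current_ || fetch_pending_)
      return;  // Fresh results exist or may still arrive; keep the cache.
    // The cached suggestions are from a previous run and nothing will replace
    // them, so a simple prefix check is enough: drop any suggestion that no
    // longer begins with what the user has typed.
    cached_matches_.erase(
        std::remove_if(cached_matches_.begin(), cached_matches_.end(),
                       [&](const std::string& suggestion) {
                         return suggestion.compare(0, current_input.size(),
                                                   current_input) != 0;
                       }),
        cached_matches_.end());
  }

 private:
  std::vector<std::string> cached_matches_;  // Suggestions from a past run.
  bool results_are_current_ = false;  // Results were fetched for this input.
  bool fetch_pending_ = false;        // A suggest request is still in flight.
};

As the following comments point out, even this narrow version risks dropping spelling-corrected or navigational suggestions that are still useful.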
May 24 2017
I thought your concern was:
1. Type something.
2. Type more.
3. The fetch fails.

In this case, we're not going to get more suggestions, but the existing ones may still be useful. I don't think prefix matching would suffice. See, for example, the situations mentioned in bug 173915.
May 25 2017
Which situations? That bug seems to only contain a couple sample queries, where it naively looks like maybe prefix-matching would be OK. I don't actually know what the "step 3" here is. Why does the case in comment 7 reproduce? I didn't look into it enough to know if "fetch fails" is the issue.
May 25 2017
One example: prefix matching for any multi-word query will remove all navsuggestions, and there are a lot of multi-word navigational queries: new york times, museum of modern art, jc penney, etc. Prefix matching will also fail for any spelling corrections, e.g., "musum of modern art" will lose the "museum of modern art" suggestion.
May 25 2017
But like I said, only in the case where we don't have any up-to-date matches and the server won't be providing any. I'm probably fine with that case, depending on what's actually causing the "won't provide any" situation I describe above. I'm explicitly not suggesting we return to the world of bug 173915.
May 25 2018
This issue has been Available for over a year. If it's no longer important or seems unlikely to be fixed, please consider closing it out. If it is important, please re-triage the issue. Sorry for the inconvenience if the bug really should have been left as Available. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
May 31 2018
Grabbing but not starting, since this overlaps with upcoming document results and fuzzy matching features.