NetworkQualityEstimator memory leak
Issue description

NQE assumes that URLRequestHttpJob::NotifyURLRequestDestroyed is called for every URLRequest it sees. Currently, this is not the case. The method is only called when a URLRequest is destroyed while it:

1) Has a non-NULL URLRequestJob at the point of destruction (not guaranteed).

2) That URLRequestJob is a URLRequestHttpJob (not guaranteed: ServiceWorker, AppCache, and even extensions can intercept requests, resulting in different URLRequestJob types).

Note that both conditions can fail after the URLRequest has started an HttpTransaction, been redirected (or similar), and then been handed a new job of a type that does not use HttpTransactions, which pretty clearly leads to problems; a sketch of the resulting leak follows below.

I don't think the ThroughputAnalyzer should be tracking requests at all. Beyond the fact that tracking requests has historically proven problematic (see, for example, extensions), it creates circular dependencies and layering issues.
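A minimal sketch of the leak pattern, assuming the analyzer keeps raw URLRequest pointers in a container and relies on URLRequestHttpJob notifications for removal (class and member names here are illustrative, not the actual identifiers in net/nqe/):

  #include <set>

  class URLRequest;  // Stands in for net::URLRequest.

  class ThroughputAnalyzer {
   public:
    // Called from URLRequestHttpJob when a transaction starts.
    void NotifyStartTransaction(const URLRequest* request) {
      tracked_requests_.insert(request);
    }

    // Called from URLRequestHttpJob::NotifyURLRequestDestroyed() or
    // DoneWithRequest(). If the request's final job is not a
    // URLRequestHttpJob (e.g., a ServiceWorker or AppCache job), this
    // never runs for that request.
    void NotifyRequestCompleted(const URLRequest* request) {
      tracked_requests_.erase(request);
    }

   private:
    // Raw pointers whose removal depends entirely on the notification
    // above being delivered; a missed notification leaves a dangling
    // entry behind, and the set grows without bound.
    std::set<const URLRequest*> tracked_requests_;
  };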
Nov 3 2016
As a first pass fix, maybe track URLRequestJobs instead of URLRequests, and move all NQE calls into URLRequestJob from URLRequestHttpJob? Or possibly move them all into URLRequestHttpJob from URLRequestJob? Still thinking tracking them isn't the best way to go, but don't want to delve into it enough to suggest a complete solution. Having said that, one option is to have some sort of NQEPerRequestData field, and make it a member of the URLRequestJob itself. Not sure if you need to be able to peer into URLRequests when they're not calling into the NQE, though, so maybe not a viable option. Could alternatively move the stuff into URLRequest, but I think scoping things to a URLRequestJob makes more sense.
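For concreteness, a rough sketch of the NQEPerRequestData idea (the class and member below are hypothetical and do not exist in the tree):

  #include <memory>

  // Hypothetical per-job NQE state. Byte counts, timestamps, etc.
  // would live here instead of in a map inside the analyzer.
  class NQEPerRequestData {};

  class URLRequestJob {
   public:
    // When the job dies, nqe_data_ dies with it: no external registry
    // has to be told about the destruction in order to clean up.
    virtual ~URLRequestJob() = default;

   private:
    std::unique_ptr<NQEPerRequestData> nqe_data_;
  };

The appeal is that cleanup becomes automatic: destroying the job destroys the data, with no notification protocol to get wrong.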
Nov 8 2016
There's another bug here: Requests are added to the NQE's list in NetworkQualityEstimator::NotifyStartTransaction, unconditionally. But this method can be called multiple times for the same URLRequestJob, without intervening calls that remove the request from the list.
Jan 14 2017
Thanks for filing this bug. Right now, the throughput analyzer gets notified from 3 different methods:

(1) URLRequestHttpJob::StartTransactionInternal(): results in the request getting added to a map.

(2) URLRequestHttpJob::NotifyURLRequestDestroyed(): kicks the request out of the map.

(3) URLRequestHttpJob::DoneWithRequest(CompletionCause reason): also kicks the request out of the map.

Note that all the calls are already from URLRequestHttpJob.

mmenke: I think there is some confusion because of a 4th NQE call, from URLRequestJob::RecordBytesRead(). That call is not propagated to the throughput analyzer, so it has no effect on the tracking of requests in the map.

My understanding is that DoneWithRequest() is called in URLRequestHttpJob's destructor, so the entry should always be kicked out of the map.

Also, re #3: since it is a map keyed by URLRequest pointer, I believe subsequent insertions (e.g., when a request results in multiple transactions) should not have any effect on correctness or memory usage, although NQE could save on some computation (see the illustration below). Thoughts?
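A standalone illustration of why duplicate insertions are harmless for memory. This is plain STL behavior, not the Chromium code: inserting a key that is already present in a std::map or std::unordered_map is a no-op.

  #include <cassert>
  #include <unordered_map>

  int main() {
    // Stand-in for the analyzer's map keyed by URLRequest pointer.
    std::unordered_map<const void*, int> requests;
    int dummy = 0;
    const void* key = &dummy;

    // The first insertion succeeds; a second insertion with the same
    // key is rejected, so repeated NotifyStartTransaction calls for
    // one request cannot grow the map.
    assert(requests.emplace(key, 1).second);
    assert(!requests.emplace(key, 2).second);
    assert(requests.size() == 1);

    // One erase (DoneWithRequest / NotifyURLRequestDestroyed) fully
    // removes the entry.
    requests.erase(key);
    assert(requests.empty());
    return 0;
  }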
Jan 27 2017
mmenke: does #4 answer all the questions? If yes, I am inclined to close this as "WontFix".
Jan 27 2017
Sorry, forgot to look at this again. It looks to me like you're right; I think I missed the DoneWithRequest path. With it, do we even need the NotifyURLRequestDestroyed path?
Jan 27 2017
Probably not, but I will check if I can remove it. Thanks for the suggestion.
Comment 1 by tbansal@chromium.org, Nov 3 2016. Status: Assigned (was: Untriaged)