Consider having finer TaskPriority levels
Issue description (Comment 1 by gab@chromium.org, Jun 27 2016)

eroman@ wrote at https://codereview.chromium.org/2077413009/diff/20001/net/proxy/dhcp_proxy_script_fetcher_win.cc#newcode63

"""
As an aside, I wonder if just these 3 priority classes will be sufficiently granular to make good scheduling choices. Here for instance after the transition we could just be firing off all the tasks with ExecutionMode::PARALLEL and priority=USER_VISIBLE. That is fine, but in theory we have more information available. Some of the tasks (adapter probes) are less important than others and could be de-prioritized. But not so unimportant as to become BACKGROUND. This particular case isn't all that interesting (wasn't really optimized before), but still curious if there are plans to have more priorities or other ways to describe precedence of tasks than those 3.
"""

The goal of having only 3 priorities was that anything beyond that was thought to lead to arbitrary choices, with different components picking different priorities for the same use case.

Something we could do, though, is define TaskPriority as an enum (instead of an enum class) with values like BACKGROUND = 0, USER_VISIBLE = 100, USER_BLOCKING = 200, and let components post tasks at, say, USER_VISIBLE +/- X. Components should only use this to express the relative priority of their own tasks, not to try to coordinate with other components' priorities (e.g. USER_VISIBLE - 1 would mean it's a foreground task but matters less than that component's typical foreground task).

Note: given enough work this could result in starvation (today's model can also starve USER_VISIBLE work, but the thought is that the volume of USER_BLOCKING work should not starve 100% of resources for long periods of time).
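To make that concrete, here is a minimal sketch of the plain-enum idea (hypothetical; today's TaskPriority is an enum class with just the three values):

// Hypothetical plain-enum variant of TaskPriority, not the real one. The gaps
// between the canonical levels let a component nudge priority relative to its
// own tasks.
enum TaskPriority {
  BACKGROUND = 0,
  USER_VISIBLE = 100,
  USER_BLOCKING = 200,
};

// E.g. an adapter probe that is still user-visible work but matters less than
// the component's typical foreground tasks:
constexpr int kAdapterProbePriority = USER_VISIBLE - 1;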
Oct 24 2016
FTR, here's an interesting paper with a solution to the starvation problem: https://www.researchgate.net/publication/260038165_STARVATION-PROOF_PRIORITY_ROUND-ROBIN_QUEUES_FOR_TIME-SHARING_SYSTEMS tl;dr: each priority level is responsible for running a task from the priority level below it once it has run k tasks in a row and the level below still isn't empty. This means that even if priority level N is flooded, priority level M < N will still run tasks at a rate of (1/k)^(N-M).
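Roughly, the selection step could look like this (a sketch of the paper's scheme, not Chromium code; the names are made up, and level 0 is the highest priority):

// Hedged sketch of the "yield after k" selection from the paper.
#include <array>
#include <cstddef>
#include <deque>
#include <functional>
#include <utility>

constexpr size_t kNumLevels = 3;  // e.g. USER_BLOCKING, USER_VISIBLE, BACKGROUND
constexpr int kYieldAfter = 5;    // "k"; 5 matches the ratio mentioned later.

class PriorityQueues {
 public:
  void Post(size_t level, std::function<void()> task) {
    queues_[level].push_back(std::move(task));
  }

  // Pops and returns the task that should run next, or null if all queues are
  // empty.
  std::function<void()> TakeNextTask() {
    size_t level = FirstNonEmptyAtOrBelow(0);
    if (level == kNumLevels)
      return nullptr;
    // A level that has run k tasks in a row hands this slot one level down.
    // The level it hands to may itself have spent its quota and chain further
    // down, which is what produces the (1/k)^(N-M) rate.
    while (consecutive_runs_[level] >= kYieldAfter) {
      const size_t lower = FirstNonEmptyAtOrBelow(level + 1);
      if (lower == kNumLevels)
        break;  // Nothing below is waiting; keep running this level.
      consecutive_runs_[level] = 0;
      level = lower;
    }
    ++consecutive_runs_[level];
    std::function<void()> task = std::move(queues_[level].front());
    queues_[level].pop_front();
    return task;
  }

 private:
  // Returns the first non-empty level at or below |level|, or kNumLevels.
  size_t FirstNonEmptyAtOrBelow(size_t level) const {
    for (; level < kNumLevels; ++level) {
      if (!queues_[level].empty())
        return level;
    }
    return kNumLevels;
  }

  std::array<std::deque<std::function<void()>>, kNumLevels> queues_;
  std::array<int, kNumLevels> consecutive_runs_{};
};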
Oct 24 2016
This is effectively a bit like an aging strategy (where tasks slowly gain priority while not running), but it avoids the flipside of aging, where you get a flood of low priority tasks once a batch posted at the same time has all aged enough to be promoted. Instead, in this scheme the levels "age" by being starved, and the higher level causing the starvation yields to them for one task whenever it exceeds its quota.
Oct 24 2016
If we expect starvation to be rare, then this would likely suffice. I worry a bit about what happens at level N-4 or below, where the guaranteed rate drops to (1/k)^4 or less. If we're willing to do without determinism, we could probabilistically choose the next task, with higher priority tasks having a higher likelihood of being chosen. Then you don't need to keep track of the counts, and you can still provide a minimum rate, probabilistically.
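As a sketch (made-up names, arbitrary weights), the probabilistic pick could look like:

// Every pending task is a candidate, weighted by its priority, so a single
// pick is O(n) in the number of pending tasks.
#include <random>
#include <vector>

struct PendingTask {
  int priority;  // e.g. BACKGROUND = 0, USER_VISIBLE = 100, USER_BLOCKING = 200
};

// Returns the index of the task to run next; |tasks| must be non-empty.
int PickTaskIndex(const std::vector<PendingTask>& tasks, std::mt19937& rng) {
  std::vector<double> weights;
  weights.reserve(tasks.size());
  for (const PendingTask& task : tasks)
    weights.push_back(1.0 + task.priority);  // Higher priority => more likely.
  std::discrete_distribution<int> pick(weights.begin(), weights.end());
  return pick(rng);
}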
Oct 24 2016
#4 would increase the runtime complexity of choosing a task, since every pending task (well, sequence, but whatever) becomes a candidate, making each pick O(n) in the number of pending tasks. You could reduce the complexity by randomly choosing among the levels' queues instead of among every task, which I think comes out to the same thing, except that #2 guarantees 1/k whereas a probability of 1/k is only a probabilistic guarantee.
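One way to read the per-level variant (again only a sketch, with made-up names): walk the levels from highest to lowest and pass over a non-empty level with probability 1/k, giving the levels below it a turn.

#include <cstddef>
#include <random>
#include <vector>

constexpr double kPassProbability = 1.0 / 5;  // "1/k", k = 5 for illustration.

// |level_non_empty[i]| says whether priority level i (0 = highest) has work.
// Returns the level to run next, or -1 if all levels are empty. A level d
// steps below the top runs at an expected rate of roughly (1/k)^d, but only in
// expectation rather than as a hard guarantee.
int PickLevel(const std::vector<bool>& level_non_empty, std::mt19937& rng) {
  std::bernoulli_distribution pass(kPassProbability);
  int last_non_empty = -1;
  for (size_t level = 0; level < level_non_empty.size(); ++level) {
    if (!level_non_empty[level])
      continue;
    last_non_empty = static_cast<int>(level);
    if (!pass(rng))
      return last_non_empty;  // Usually run the highest non-empty level.
  }
  return last_non_empty;  // Everything above was passed over, or all is empty.
}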
Oct 25 2016
You're right, worst case it's O(n) where n is the number of priorities. The worst case for the 1/k method is also O(n), though less common. Given that the PRNG isn't free, I'd go with your proposed method.
Oct 25 2016
I wonder if we should look at task runtimes instead of task counts? At least in the renderer task sizes vary by several orders of magnitude, so you can still get a fair bit of starvation even though you just ran "one" task.
Oct 25 2016
What do you mean by "look at task runtimes"? Schedule based on how long a task is going to take? We did hand-wavingly discuss trying to come up with heuristics (and even ML!) to do scheduling based on expected time-to-completion.
Oct 26 2016
Right, expected time to completion is hard. I was more thinking about looking at task queueing delays and trying to prioritize tasks that have been waiting for a long time.
Oct 27 2016
Ah, this is already the case for tasks at the same priority level. Across priority levels, the technique you're referring to is known as "aging": tasks slowly gain priority while they wait. The problem is that a batch of low priority tasks posted a while ago can all be promoted at once, flooding the high priority queue.
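For reference, classic aging boils down to boosting a task's effective priority by its queueing delay; a made-up sketch (not anything we implement):

#include <chrono>

constexpr double kAgingBoostPerSecond = 1.0;  // Illustrative rate only.

// A task's effective priority grows with how long it has been queued, so
// starved tasks eventually outrank newer, nominally higher-priority ones. The
// flood problem described above comes from a whole batch of old low-priority
// tasks crossing the threshold at roughly the same time.
double EffectivePriority(double base_priority,
                         std::chrono::steady_clock::time_point posted_at,
                         std::chrono::steady_clock::time_point now) {
  const double waited_seconds =
      std::chrono::duration<double>(now - posted_at).count();
  return base_priority + kAgingBoostPerSecond * waited_seconds;
}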
Oct 27 2016
Yep, that's true. I'm not sure what solution we want here. FWIW the Blink scheduler uses a fixed 5:1 round-robin ratio between high and normal priority tasks, and it seems to be working well enough for the workloads we've looked at.
Oct 27 2016
Okay, so #2 with k = 5?
Oct 28 2016
Pretty much yeah.
Mar 29 2017
Intentionally set to "Available" == I'm the POC but not working on this right now.
Apr 12 2018
This issue has been Available for over a year. If it's no longer important or seems unlikely to be fixed, please consider closing it out. If it is important, please re-triage the issue. Sorry for the inconvenience if the bug really should have been left as Available. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot