A customer is reporting that ~44% of calls to estimate() take longer than 100ms to resolve.
We should look into whether there are optimization opportunities. In theory, the call should just be a hop from the renderer to the browser process, a database query (possibly served from an in-memory cache), and a response delivered via a microtask.
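As a quick renderer-side sanity check, something like the sketch below could reproduce the customer's numbers and give us a baseline before/after any change. The sample count and the sequential awaits are arbitrary choices for this sketch, not how the customer measured it.

```ts
// Sample end-to-end latency of navigator.storage.estimate() from a page
// and report the fraction of calls that take longer than 100ms.
async function sampleEstimateLatency(samples = 1000): Promise<void> {
  let slow = 0;
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await navigator.storage.estimate();
    if (performance.now() - start > 100) slow++;
  }
  console.log(`${((slow / samples) * 100).toFixed(1)}% of calls took >100ms`);
}

sampleEstimateLatency();
```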
The biggest time sink is probably that we measure actual usage for the origin on demand, hitting disk whenever we don't trust the DB value. It could also be that in-memory caches aren't warm, or that we're doing something wasteful somewhere else. Even if we do need to hit disk, we could kick off an async task to do so earlier (e.g. on navigation) so the result is ready by the time estimate() is called; a sketch of that idea is below.
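One hedged way the "start the disk work earlier" idea could look, written as a TypeScript sketch rather than the actual browser-side code; measureUsageFromDisk, prewarmUsage, and getUsageForEstimate are hypothetical names standing in for whatever the quota code actually uses:

```ts
// Illustrative only: cache the expensive per-origin usage measurement and
// start it eagerly (e.g. on navigation) so estimate() can usually await an
// already-running or completed task instead of starting a disk scan.
type Usage = { usedBytes: number };

// Hypothetical stand-in for the real disk measurement; here it just
// simulates a slow scan.
async function measureUsageFromDisk(origin: string): Promise<Usage> {
  await new Promise((resolve) => setTimeout(resolve, 150));
  return { usedBytes: 0 };
}

const pendingUsage = new Map<string, Promise<Usage>>();

// Called early, e.g. from a navigation hook (hypothetical).
function prewarmUsage(origin: string): void {
  if (!pendingUsage.has(origin)) {
    pendingUsage.set(origin, measureUsageFromDisk(origin));
  }
}

// Called when estimate() arrives; reuses the prewarmed task if present.
async function getUsageForEstimate(origin: string): Promise<Usage> {
  prewarmUsage(origin); // no-op if the measurement already started
  return pendingUsage.get(origin)!;
}
```

A real version would also need to invalidate or refresh the cached measurement as the origin writes data; the sketch only illustrates moving the disk work off the estimate() critical path.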