|Issue 536620||IndexedDB blocks DOM rendering|
|Starred by 23 users||Reported by nolan.la...@gmail.com, Sep 27 2015|
Sep 27 2015,
Issue filed on Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1208840
Sep 28 2015,
Are you sure Safari/Edge behave differently if it's a JS animation? I believe both of those browsers handle GIF animations off the main thread, so it's not quite an apples-to-apples comparison. And yes... event delivery happens on the main thread, so the IDB event dispatch (of the successful puts) hammers the main thread. We've discussed a couple of possibilities here, including a "write only" transaction that doesn't bother with individual success events and implicitly commits immediately, or a "putAll" batch operation.
Sep 28 2015,
Yeah, JS-based animation would be apples-to-apples. There are some thoughts here: https://gist.github.com/inexorabletash/d55a6669a040e92e47c6

Basically there are two reasons we might consider "write-only" transactions:
(1) Every 'success' event has an internal overhead even if there's no handler. Even if measured in microseconds, if you're doing 100k put calls that can add up to a noticeable impact on the main thread.
(2) The transaction can't attempt to commit until all of the 'success' events have been delivered, since the last one might attempt to queue more work in the tx; this adds measurable overhead to each transaction.

A "write-only" transaction that doesn't deliver success/fail events on individual operations would avoid both of those. putAll() would address #1 by coalescing e.g. 100k events into just one event, but wouldn't help with #2. We may be able to be more clever about reducing the overhead when there's no listener attached. (It's subtle, though - when exactly do we check to see if an event handler is attached?) Anyway, definitely keeping this open.
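A rough sketch of how the "write-only"/putAll idea above could be approximated in today's API: issue all the put() calls without per-request listeners and rely on the transaction's single complete event. This doesn't remove the engine's internal per-request event dispatch (reason #1), but it keeps per-record handler work out of script. `db`, `storeName`, and `records` are assumed inputs, and `putAllEmulated` is a made-up name, not a proposed API.

```javascript
// Hedged sketch: emulate a "putAll"-style pattern by attaching no
// success listeners to individual put() requests and resolving only
// on the transaction's single 'complete' event.
function putAllEmulated(db, storeName, records) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction(storeName, 'readwrite');
    const store = tx.objectStore(storeName);
    for (const record of records) {
      store.put(record); // no per-request success handlers attached
    }
    // One listener for the whole transaction instead of one per record.
    tx.oncomplete = () => resolve(records.length);
    tx.onerror = () => reject(tx.error);
  });
}
```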
Oct 6 2015,
OK, apparently the structured cloning is not the culprit. @calvinmetcalf added another web worker test using cloned objects (i.e. not creating the objects within the worker itself), and it still blocks less than straight IDB: http://nolanlawson.github.io/database-comparison/
Oct 6 2015,
Thanks. Looking at nolan's perf numbers for Safari/Edge (which don't block), the transactions take an order of magnitude more time to commit. I think they're likely deprioritizing the database event delivery, something we've talked about. We've had a "to-do" for a while to explore integrating with the Blink scheduler to defer IDB event processing - let's track that here.
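The deprioritization idea above can also be approximated in application code: process a large workload in slices and yield to the event loop between slices so rendering isn't starved. This is a generic illustration, not the Blink scheduler integration being discussed; `processInSlices` and its parameters are invented.

```javascript
// Process `items` in slices of `slice` elements, yielding to the event
// loop between slices so other tasks (rendering, input) can run.
// Resolves with the number of items processed.
function processInSlices(items, work, slice = 500) {
  return new Promise((resolve) => {
    let i = 0;
    function step() {
      const end = Math.min(i + slice, items.length);
      for (; i < end; i++) work(items[i]);
      if (i < items.length) setTimeout(step, 0); // yield between slices
      else resolve(i);
    }
    step();
  });
}
```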
Oct 9 2015,
I should note that it's not just the events that cause overhead - the put() calls themselves must do a little synchronous processing (serializing the data, etc) but that's more obvious and less avoidable.
Oct 9 2015,
I was wrong - the event overhead is negligible. The bulk of the time that Kirby pauses is due to the put() calls. After making some speculative changes to event processing (with no benefit) I added some debug spew and realized it was all in the puts. I used chrome://tracing to view this (nolan - have you tried that out? If not, ping me offline and I'll walk you through it).

I also added some deeper tracing to the phases of the put. For 1000 records (to keep the tracing results sane), the time breaks down as follows: the page reports 141ms end-to-end. The script function that makes the put calls has a wall-clock duration of 64ms; the back-end transaction commit and flush to disk takes 53ms, and there's a little bit lost in IPC. Even with more records the commit time remains small, so it ends up being the put() calls dominating.

Of the 64ms, the put()s consume 57ms. Breaking that down:
* 15ms is spent serializing
* 15ms is spent extracting the inline key, 9ms of which is deserializing
* 15ms is spent setting up the async IPC call into the backend

The remaining time (~10ms) is split into little operations (validating inputs, checking to see if index keys need to be computed, computing the wire format, creating the result IDBRequest, etc). Unfortunately, all of those things need to occur synchronously with respect to script to enforce constraints (i.e. they may throw), so we can't defer them. We need to look into optimizing the individual steps. We know that serialization/deserialization is slow. It's actually faster to JSON.stringify() then postMessage() a string than to postMessage() an object. :(
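The serialization cost called out above can be sized, very roughly, with a micro-benchmark like the following. It only measures JSON.stringify, not the structured-clone path the browser uses internally, so treat it as a ballpark sketch; the record shape and function names are invented.

```javascript
// Build n toy records roughly resembling small database entries.
function makeRecords(n) {
  const records = [];
  for (let i = 0; i < n; i++) {
    records.push({ id: i, title: 'record ' + i, payload: { a: i, b: String(i) } });
  }
  return records;
}

// Time how long it takes to JSON-serialize every record.
function timeStringify(records) {
  const start = Date.now();
  const strings = records.map((r) => JSON.stringify(r));
  return { ms: Date.now() - start, strings };
}
```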
Oct 10 2015,
Nope, never used chrome://tracing. If you have a good tutorial or something you'd recommend, I'd love to try it. :) That's really odd that serialization eats up so much. With WorkerPouch, I ran tests comparing serializing exactly the same object and then posting it to a worker (no JSON.parse or stringify), and it definitely blocks the DOM less. Maybe it's due to the fact that PouchDB's internal IDB representation of objects is larger than the external one? That database-comparison app also has a test with serializing the IDB objects themselves ("worker with cloning"), and it's still less blocking. If the answer is to just use workers, I'm fine with that, but it'd be really surprising to me if just cloning added up to so much time. See: https://github.com/nolanlawson/worker-pouch/blob/master/README.md#performance-benefits
Oct 12 2015,
If you offload to a worker you're still doing one serialization on the main thread (for posting the data), but you're offloading the key extraction and the setup of the IPC call, so that'd be a 2/3 reduction, which is a lot. The other option is to split the put() calls into batches, which you can do within one transaction: just wait on the last put's success event from each batch and then do the next one. It's still doing work on the main thread, but it shouldn't starve the DOM as much.
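The batching suggestion above might look something like this: chunk the records, and within a single transaction attach a success listener only to the last put() of each batch, queuing the next batch from that handler (which keeps the transaction alive). The helper names and default batch size are assumptions, not an established API.

```javascript
// Split an array into consecutive batches of at most `size` elements.
function chunk(arr, size) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}

// Put all records in one transaction, batch by batch. Only the final
// put() of each batch gets a success listener; issuing the next batch
// from that handler keeps the transaction alive until we're done.
function batchedPut(db, storeName, records, batchSize = 1000) {
  return new Promise((resolve, reject) => {
    const tx = db.transaction(storeName, 'readwrite');
    const store = tx.objectStore(storeName);
    const batches = chunk(records, batchSize);
    let b = 0;
    function nextBatch() {
      if (b === batches.length) return; // tx will now auto-commit
      const batch = batches[b++];
      let last;
      for (const record of batch) last = store.put(record);
      last.onsuccess = nextBatch; // listen only on the batch's final put
    }
    tx.oncomplete = () => resolve(records.length);
    tx.onerror = () => reject(tx.error);
    nextBatch();
  });
}
```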
Oct 12 2016,
This issue has been available for more than 365 days, and should be re-evaluated. Please re-triage this issue. The Hotlist-Recharge-Cold label is applied for tracking purposes, and should not be removed after re-triaging the issue. For more details visit https://www.chromium.org/issue-tracking/autotriage - Your friendly Sheriffbot
Oct 12 2016,
The work to speed up serialization/deserialization is ongoing (see dependent bug). That's where the bulk of the possible performance improvement is going to happen here. I don't know if we're expecting any measurable speedup from converting IPC to Mojo.
Thanks for looking into this. What honestly surprises me is that in both Chrome 56 and Chromium 58, using WebSQL does not block the main thread while the same test using IndexedDB does... any further ideas?
It'll be interesting to see the speedup once the serialization changes are in. I note that in nolan's test the IndexedDB operation uses a keyPath, which incurs a deserialization on each put (to avoid side-effects), whereas the WebSQL one has explicit keys. WebSQL values are also only numbers or strings, so you're not paying the serialization cost there either. An apples-to-apples comparison would be nice - string values and explicit keys on both sides.
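To make the keyPath point concrete: with an inline key the engine must crack open the cloned value to extract the key on every put(), whereas an explicit key skips that. This toy extractor mimics keyPath resolution for simple dotted paths; it is not the spec's algorithm, and the usage lines in comments are illustrative only.

```javascript
// Resolve a simple dotted keyPath (e.g. 'a.b') against a value, the way
// an inline key is conceptually extracted. Returns undefined if any
// segment is missing.
function extractKey(value, keyPath) {
  return keyPath
    .split('.')
    .reduce((obj, part) => (obj == null ? undefined : obj[part]), value);
}

// Inline key: the engine pulls the key out of the stored value:
//   store.put({ id: 42, body: '...' })        // keyPath: 'id'
// Explicit (out-of-line) key: no extraction needed:
//   store.put({ body: '...' }, 42)
```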
We have a simple chat client PWA that has offline sync support and stores chat history in IDB. When it's running, it can block my typing of emails or IMs in Gmail for a second or more (the time it takes to type two or three words), which is really annoying. Normally, when your own tab is using IDB, you're waiting for the data anyway, so you don't really notice the delay; but when some other tab or process (like a PWA service worker) is doing I/O unbeknownst to you, you don't expect your window to completely block. I don't know if this is being made worse by the service worker being killed and restarted periodically. I think the priority of this issue should be raised.
kgr@ - do you have a repro? That sounds extremely unusual. Note that the user-noticeable delays this issue is about occur when 10,000 or 100,000 requests are made in one synchronous loop from the main thread; a similar delay would occur with a comparable number of postMessage() calls. I highly doubt you're doing that in your chat client, so you may be seeing something completely different. Can you share a reduced repro?
I've sent repro information in a private email. There doesn't need to be any traffic for this to happen, which is why I think it might be related to IDB startup rather than actual I/O, since the service worker periodically shuts down.
FYI, re: the discussion around #c11. On comparable hardware (I think it's the same...) in 57 I see around a 2x performance improvement:
* The overall time reported by the page for 1000 puts to IndexedDB is down from 141ms to 110ms (but for only 1000 records there's a fair bit of overhead which is not amortized).
* The script function that makes the calls is down from 64ms to 27ms.
* The put() calls are down from 57ms to 18ms.
I did not break down the time in put(), but the speedups are likely due to: (1) faster serialization of records, (2) faster deserialization of records (when extracting the key), (3) the switch to Mojo, which reduces IPC overhead.
Since Chrome updated to m56, my issue went away.