
Issue 774116


Issue metadata

Status: WontFix
Owner:
Closed: Oct 2017
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 3
Type: Bug




Weirdness with SimpleEntryImpl::doomed_ and reused SimpleEntryImpl objects.

Project Member Reported by morlovich@chromium.org, Oct 12 2017

Issue description

There is a scenario where SimpleCache can reuse SimpleEntryImpl objects across multiple opens. If you do something like this:

backend->CreateEntry(key, &entry, callback);
...
entry->Close();
backend->OpenEntry(key, &entry, callback);

OpenEntry will output the same pointer as CreateEntry did (the Close will always still be pending from the point of view of the I/O thread, since we haven't given its event loop a chance to run --- if we had, whether the pointer was the same or not would be timing-dependent, but it would likely be the same, since disk I/O usually takes more time than a few function calls).

CloseOperationComplete calls MakeUninitialized, which prepares the object for reuse like this.

What seems problematic is that the doomed_ bit is monotonic, which means that if one does something like

Create / Close / backend->Doom / Create / Close / backend->Doom

then my reading of the code is that the second Doom will get ignored.
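Spelled out in the same style as the snippet above (with key, entry, and the callbacks again being placeholders), the sequence would look roughly like:

backend->CreateEntry(key, &entry, callback);
entry->Close();
backend->DoomEntry(key, callback);   // first doom: sets doomed_ on the SimpleEntryImpl
backend->CreateEntry(key, &entry, callback);   // may hand back the same SimpleEntryImpl object
entry->Close();
backend->DoomEntry(key, callback);   // second doom: doomed_ is already set, so this may be ignored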

(As a side note, MakeUninitialized should probably also dump the stream 0/stream 1 buffers, or there may be a case where one can read stale content after a CreateEntry --- will try to come up with a test case for this...)
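A rough sketch of what such a test case might exercise (hedged: buf and read_buf are hypothetical IOBuffers here, and the operations need to stay queued behind the pending Close for the object to be reused):

backend->CreateEntry(key, &entry, callback);
entry->WriteData(0, 0, buf, buf_len, callback, false);   // populate the stream 0 buffer
entry->Close();
backend->DoomEntry(key, callback);
backend->CreateEntry(key, &entry, callback);   // may reuse the old SimpleEntryImpl
entry->ReadData(0, 0, read_buf, buf_len, callback);   // does this see the stale stream 0 content?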


 
Hmm, this specific scenario won't fail, since the Doom call will transition the entry to entries_pending_doom_ immediately upon invocation, so Create will get a fresh entry.

So maybe the side-note scenario is the only one to worry about, and I am not sure I can make it visible yet.
(By Doom above I meant SimpleEntryImpl::DoomEntry.)

Now, the SimpleSynchronousEntry-side invocation of Doom is a bit different, since its effect, once the completion becomes visible to the I/O thread, is to remove the entry from the active entries table; any operation after that will get a fresh object, but operations queued up before that will still run on the old one. So if there is both a failure and a queued-up explicit doom, I think there is a chance that the queued doom may race against operations on an entry created after this one. Let's see if I can make a test case.
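Roughly the shape of the interleaving I am worried about (hedged: entry2 is a placeholder, and whether the disk-side failure can be provoked at the right moment is exactly the open question):

backend->CreateEntry(key, &entry, callback);
entry->WriteData(0, 0, buf, buf_len, callback, false);   // suppose this fails inside SimpleSynchronousEntry,
                                                         // which dooms the entry on the disk side
backend->DoomEntry(key, callback);   // explicit doom, queued up before the failure is visible
backend->CreateEntry(key, &entry2, callback);   // after the failure is visible: fresh object, same key
// ... the doom queued on the old entry may now race against operations on entry2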


Status: WontFix (was: Started)
Hmm, no: any queued-up explicit Doom will immediately put the entry into entries_pending_doom_, so later Creates, etc., will get serialized properly. One can end up with DoomEntryInternal running after the SimpleSynchronousEntry side has doomed the entry itself, but that will just call a redundant unlink. It gets slightly more complex with doom-rename, since the entry may have been renamed and you may or may not know about it, but at worst it will still do useless ops...
