Issue 854616

Starred by 1 user

Issue metadata

Status: Verified
Owner: leon....@intel.com
Closed: Jun 2018
Cc: roc...@chromium.org brajkumar@chromium.org falken@chromium.org
Components: Blink>ServiceWorker
EstimatedDays: ----
NextAction: ----
OS: Linux
Pri: 1
Type: Bug-Regression




Null-dereference READ in mojo::InterfaceEndpointClient::~InterfaceEndpointClient

Project Member Reported by ClusterFuzz, Jun 20 2018

Issue description

Detailed report: https://clusterfuzz.com/testcase?key=5643756834127872

Fuzzer: inferno_layout_test_unmodified
Job Type: linux_msan_chrome
Platform Id: linux

Crash Type: Null-dereference READ
Crash Address: 0x000000000000
Crash State:
  mojo::InterfaceEndpointClient::~InterfaceEndpointClient
  mojo::InterfaceEndpointClient::~InterfaceEndpointClient
  content::ServiceWorkerProviderContext::~ServiceWorkerProviderContext
  
Sanitizer: memory (MSAN)

Regressed: https://clusterfuzz.com/revisions?job=linux_msan_chrome&range=560370:560378

Reproducer Testcase: https://clusterfuzz.com/download?testcase_id=5643756834127872

Issue filed automatically.

See https://github.com/google/clusterfuzz-tools for more information.
 
Comment 1

Cc: roc...@chromium.org brajkumar@chromium.org
Components: Internals>Mojo
Labels: -Type-Bug M-68 Test-Predator-Wrong CF-NeedsTriage Type-Bug-Regression
I was unable to find the actual suspect through code search, and I see no suspected CLs in the regression range, so I'm adding the appropriate label and requesting that someone from the mojo team look into this issue.

Thanks!

Comment 2 by roc...@chromium.org, Jun 21 2018

Cc: falken@chromium.org
Components: -Internals>Mojo
Generally, when you see mojo at the top of the stack, it's helpful to look a few frames down to see whether the crashes are likely in mojo in general or in a specific feature that uses mojo.

In this case it looks like it's probably a service worker issue; namely, ServiceWorkerProviderContext's destructor might be getting invoked on the wrong thread. +falken@ for that.

Comment 3 by falken@chromium.org, Jun 22 2018

Cc: leon....@intel.com
Components: Blink>ServiceWorker
SWPC (ServiceWorkerProviderContext) is base::RefCountedThreadSafe but has a somewhat unusual deleter.

void ServiceWorkerProviderContext::DestructOnMainThread() const {
  // Not on the main thread: try to post a deletion task to the main
  // thread, and bail out if that succeeds.
  if (!main_thread_task_runner_->RunsTasksInCurrentSequence() &&
      main_thread_task_runner_->DeleteSoon(FROM_HERE, this)) {
    return;
  }
  // Reached on the main thread, or when DeleteSoon() failed (e.g. during
  // shutdown); in the latter case this deletes on the current thread.
  delete this;
}

Is that deleter right?
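
For illustration, here is a minimal, self-contained sketch of the hazard in question (plain C++, not Chromium code; FakeTaskRunner, Provider, and the shutdown flag are invented stand-ins): base::TaskRunner::DeleteSoon() returns false when the deletion task can no longer be posted, and a deleter shaped like the one above then falls back to "delete this" on whatever thread dropped the last reference.

#include <atomic>
#include <cstdio>
#include <thread>

// Stand-in for base::SingleThreadTaskRunner. DeleteSoon() mirrors
// base::TaskRunner::DeleteSoon(), which returns false when the deletion
// task can no longer be posted (e.g. the target thread is shutting down).
struct FakeTaskRunner {
  std::thread::id bound_thread = std::this_thread::get_id();
  std::atomic<bool> accepting{true};

  bool RunsTasksInCurrentSequence() const {
    return std::this_thread::get_id() == bound_thread;
  }
  bool DeleteSoon(const void*) { return accepting.load(); }
};

struct Provider {
  explicit Provider(FakeTaskRunner* r) : runner(r) {}

  // Same shape as ServiceWorkerProviderContext::DestructOnMainThread().
  void DestructOnMainThread() const {
    if (!runner->RunsTasksInCurrentSequence() && runner->DeleteSoon(this)) {
      return;  // a deletion task was (notionally) posted to the main thread
    }
    delete this;  // main thread, or DeleteSoon() failed: the wrong thread
  }

  ~Provider() {
    std::printf("destroyed on %s thread\n",
                runner->RunsTasksInCurrentSequence() ? "the main" : "the WRONG");
  }

  FakeTaskRunner* runner;
};

int main() {
  FakeTaskRunner runner;  // the thread running main() plays the main thread
  const Provider* p = new Provider(&runner);
  runner.accepting = false;  // simulate shutdown: DeleteSoon() now fails
  std::thread worker([p] { p->DestructOnMainThread(); });
  worker.join();  // prints "destroyed on the WRONG thread"
}

If that fallback path is reachable in practice, the mojo binding owned by the context would be torn down off its bound thread. For comparison, base::RefCountedDeleteOnSequence always routes deletion through the owning task runner (leaking on shutdown rather than deleting off-thread); whether that trade-off is right here is for the service worker folks to judge.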

Comment 4 by leon....@intel.com, Jun 22 2018

Owner: leon....@intel.com
Status: Started (was: Untriaged)
Setting myself as owner of this bug so that I can access the detailed report: https://clusterfuzz.com/testcase?key=5643756834127872

Comment 5 by leon....@intel.com, Jun 22 2018

I'm not yet clear on the root cause, but I want to share some findings so far (see the sketch after the crash log below):
  - The crash stack shows the ServiceWorkerProviderContext is destructed on the correct thread, i.e. the main thread.
  - The crash happens during the destruction of ServiceWorkerProviderContext::binding_, which is bound on the main thread.
  - More specifically, the null dereference happens while destroying the vector mojo::FilterChain::filters_. |filters_| is supposed to hold exactly one element, of type ServiceWorkerContainerRequestValidator, throughout the lifetime of the FilterChain. I did not find any code that touches it and could lead to a nullptr...

MemorySanitizer:DEADLYSIGNAL
	==1==ERROR: MemorySanitizer: SEGV on unknown address 0x000000000000 (pc 0x55956e33c3b4 bp 0x7ffd6ce64880 sp 0x7ffd6ce64810 T1)
	==1==The signal is caused by a READ memory access.
	==1==Hint: address points to the zero page.
	#0 0x55956e33c3b3 in operator() buildtools/third_party/libc++/trunk/include/memory:2321:5
	#1 0x55956e33c3b3 in reset buildtools/third_party/libc++/trunk/include/memory:2634
	#2 0x55956e33c3b3 in ~unique_ptr buildtools/third_party/libc++/trunk/include/memory:2588
	#3 0x55956e33c3b3 in destroy buildtools/third_party/libc++/trunk/include/memory:1866
	#4 0x55956e33c3b3 in __destroy<std::__1::unique_ptr<mojo::MessageReceiver, std::__1::default_delete<mojo::MessageReceiver> > > buildtools/third_party/libc++/trunk/include/memory:1728
	#5 0x55956e33c3b3 in destroy<std::__1::unique_ptr<mojo::MessageReceiver, std::__1::default_delete<mojo::MessageReceiver> > > buildtools/third_party/libc++/trunk/include/memory:1596
	#6 0x55956e33c3b3 in __destruct_at_end buildtools/third_party/libc++/trunk/include/vector:413
	#7 0x55956e33c3b3 in clear buildtools/third_party/libc++/trunk/include/vector:356
	#8 0x55956e33c3b3 in ~__vector_base buildtools/third_party/libc++/trunk/include/vector:441
	#9 0x55956e33c3b3 in mojo::FilterChain::~FilterChain() mojo/public/cpp/bindings/lib/filter_chain.cc:28
	#10 0x55956e33f36c in mojo::InterfaceEndpointClient::~InterfaceEndpointClient() mojo/public/cpp/bindings/lib/interface_endpoint_client.cc:173:1
	#11 0x55956e33f517 in mojo::InterfaceEndpointClient::~InterfaceEndpointClient() mojo/public/cpp/bindings/lib/interface_endpoint_client.cc:169:53
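
To help reason about the last finding, here is a small self-contained sketch (plain C++; MessageReceiver, RequestValidator, and FilterChain below are simplified stand-ins rather than the real mojo classes) of the destruction path in frames #0-#9:

#include <memory>
#include <vector>

// Minimal stand-ins for mojo::MessageReceiver and the single filter.
struct MessageReceiver {
  virtual ~MessageReceiver() = default;  // invoked through the object's vptr
};
struct RequestValidator : MessageReceiver {};

struct FilterChain {
  // Matches frames #0-#9: ~FilterChain clears this vector, and each
  // non-null unique_ptr element deletes its MessageReceiver through a
  // virtual destructor call, i.e. a read through the element's vptr.
  std::vector<std::unique_ptr<MessageReceiver>> filters_;
};

int main() {
  FilterChain chain;
  chain.filters_.push_back(std::make_unique<RequestValidator>());
  chain.filters_.emplace_back();  // a null element is skipped harmlessly
  // ~FilterChain runs here: vector -> ~unique_ptr -> reset() -> virtual dtor.
}

Destroying a null unique_ptr never invokes the deleter, so a stored nullptr in |filters_| wouldn't crash by itself. The READ at address 0 inside operator() looks more consistent with the virtual destructor being called through a zeroed vptr, for example if the element or the FilterChain itself had already been freed, though that is speculation on my part.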

Project Member Comment 6 by ClusterFuzz, Jun 22 2018

ClusterFuzz has detected this issue as fixed in range 569503:569504.

Detailed report: https://clusterfuzz.com/testcase?key=5643756834127872

Fuzzer: inferno_layout_test_unmodified
Job Type: linux_msan_chrome
Platform Id: linux

Crash Type: Null-dereference READ
Crash Address: 0x000000000000
Crash State:
  mojo::InterfaceEndpointClient::~InterfaceEndpointClient
  mojo::InterfaceEndpointClient::~InterfaceEndpointClient
  content::ServiceWorkerProviderContext::~ServiceWorkerProviderContext
  
Sanitizer: memory (MSAN)

Regressed: https://clusterfuzz.com/revisions?job=linux_msan_chrome&range=560370:560378
Fixed: https://clusterfuzz.com/revisions?job=linux_msan_chrome&range=569503:569504

Reproducer Testcase: https://clusterfuzz.com/download?testcase_id=5643756834127872

See https://github.com/google/clusterfuzz-tools for more information.

If you suspect that the result above is incorrect, try re-doing that job on the test case report page.
Project Member Comment 7 by ClusterFuzz, Jun 22 2018

Labels: ClusterFuzz-Verified
Status: Verified (was: Started)
ClusterFuzz testcase 5643756834127872 is verified as fixed, so closing issue as verified.

If this is incorrect, please add ClusterFuzz-Wrong label and re-open the issue.
