Some WPT tests crash on Win 7 dbg: :FATAL:embedded_worker_instance.cc(524)] Check failed: kInvalidEmbeddedWorkerThreadId != thread_id_ (-1 vs. -1)
Issue description

At least these tests are flakily crashing:

external/wpt/service-workers/service-worker/activation.https.html
external/wpt/service-workers/service-worker/unregister-then-register-new-script.https.html
external/wpt/service-workers/service-worker/fetch-event-within-sw.https.html

Flakiness dashboard:
https://test-results.appspot.com/dashboards/flakiness_dashboard.html#testType=webkit_tests&tests=external%2Fwpt%2Fservice-workers%2Fservice-worker%2Factivation.https.html%2Cexternal%2Fwpt%2Fservice-workers%2Fservice-worker%2Funregister-then-register-new-script.https.html%2Cexternal%2Fwpt%2Fservice-workers%2Fservice-worker%2Ffetch-event-within-sw.https.htm%0A

The flakiness started around May 8, 2017. I can't repro on Linux Dbg so far, might need Windows.

Probable cause: EmbeddedWorkerInstance thinks the service worker is stopped, so it has a thread_id_ of -1. But somehow we get an OnSkipWaiting message from the worker.

[4816:7820:0519/000734.808:4805688:FATAL:embedded_worker_instance.cc(524)] Check failed: kInvalidEmbeddedWorkerThreadId != thread_id_ (-1 vs. -1)

Backtrace:
base::debug::StackTrace::StackTrace [0x010DD537+55] base::debug::StackTrace::StackTrace [0x010DD1D1+17] logging::LogMessage::~LogMessage [0x0113138E+94] content::EmbeddedWorkerInstance::SendMessage [0x12000580+144] content::ServiceWorkerVersion::DidSkipWaiting [0x121CC0CF+79] content::ServiceWorkerVersion::OnSkipWaiting [0x121D0FCB+43] base::DispatchToMethodImpl<content::ServiceWorkerVersion *,void (__thiscall content::ServiceWorkerVersion::*)(int),std::tuple<int> const &,0> [0x121BD1B6+38] base::DispatchToMethod<content::ServiceWorkerVersion *,void (__thiscall content::ServiceWorkerVersion::*)(int),std::tuple<int> const &> [0x121BCE07+39] IPC::DispatchToMethod<content::ServiceWorkerVersion,void (__thiscall content::ServiceWorkerVersion::*)(int),void,std::tuple<int> > [0x121BCFC8+24] IPC::MessageT<ServiceWorkerHostMsg_SkipWaiting_Meta,std::tuple<int>,void>::Dispatch<content::ServiceWorkerVersion,content::ServiceWorkerVersion,void,void (__thiscall content::ServiceWorkerVersion::*)(int)> [0x121BC515+213] content::ServiceWorkerVersion::OnMessageReceived [0x121CF37C+1596] content::EmbeddedWorkerInstance::OnMessageReceived [0x11FFD453+99] content::EmbeddedWorkerRegistry::OnMessageReceived [0x1200B7E8+56] content::ServiceWorkerDispatcherHost::OnMessageReceived [0x120E386B+3147] content::BrowserMessageFilter::Internal::DispatchMessageW [0x10446732+50] content::BrowserMessageFilter::Internal::OnMessageReceived [0x10446C5B+267] IPC::MessageFilterRouter::TryFilters [0x015ECF67+215] IPC::MessageFilterRouter::TryFilters [0x015ECEF0+96] IPC::ChannelProxy::Context::TryFilters [0x015A4BA8+184] IPC::ChannelProxy::Context::OnMessageReceived [0x015A3CD3+19] IPC::ChannelMojo::OnMessageReceived [0x01594616+230] IPC::internal::MessagePipeReader::Receive [0x015B24BC+572] IPC::mojom::ChannelStubDispatch::Accept [0x001C93C4+1012] IPC::mojom::ChannelStub<mojo::RawPtrImplRefTraits<IPC::mojom::Channel> >::Accept [0x015B1BDB+59] mojo::InterfaceEndpointClient::HandleValidatedMessage [0x0166B788+1064] mojo::InterfaceEndpointClient::HandleIncomingMessageThunk::Accept [0x0166A3D6+22] mojo::FilterChain::Accept [0x01660DB3+291] mojo::InterfaceEndpointClient::HandleIncomingMessage [0x0166B32A+154] base::WaitableEvent::`scalar deleting destructor' [0x015CB44B+1179] mojo::FilterChain::Accept [0x01660DB3+291] mojo::Connector::ReadSingleMessage [0x01658F48+264] mojo::Connector::ReadAllAvailableMessages [0x01658B9D+61]
mojo::Connector::OnHandleReadyInternal [0x01658531+177] mojo::Connector::OnWatcherHandleReady [0x016586B3+19] base::internal::FunctorTraits<void (__thiscall mojo::Connector::*)(unsigned int),void>::Invoke<mojo::Connector *,unsigned int> [0x01655D61+33] base::internal::InvokeHelper<0,void>::MakeItSo<void (__thiscall mojo::Connector::*const &)(unsigned int),mojo::Connector *,unsigned int> [0x01655FC1+49] base::internal::Invoker<base::internal::BindState<void (__thiscall mojo::Connector::*)(unsigned int),base::internal::UnretainedWrapper<mojo::Connector> >,void __cdecl(unsigned int)>::RunImpl<void (__thiscall mojo::Connector::*const &)(unsigned int),std::t [0x01656028+72] base::internal::Invoker<base::internal::BindState<void (__thiscall mojo::Connector::*)(unsigned int),base::internal::UnretainedWrapper<mojo::Connector> >,void __cdecl(unsigned int)>::Run [0x01659201+49] base::Callback<void __cdecl(unsigned int),1,1>::Run [0x016DC8DE+46] mojo::SimpleWatcher::OnHandleReady [0x016DC748+264] base::internal::FunctorTraits<void (__thiscall mojo::SimpleWatcher::*)(int,unsigned int),void>::Invoke<base::WeakPtr<mojo::SimpleWatcher> const &,int const &,unsigned int const &> [0x016DA713+51] base::internal::InvokeHelper<1,void>::MakeItSo<void (__thiscall mojo::SimpleWatcher::*const &)(int,unsigned int),base::WeakPtr<mojo::SimpleWatcher> const &,int const &,unsigned int const &> [0x016DA7EF+79] base::internal::Invoker<base::internal::BindState<void (__thiscall mojo::SimpleWatcher::*)(int,unsigned int),base::WeakPtr<mojo::SimpleWatcher>,int,unsigned int>,void __cdecl(void)>::RunImpl<void (__thiscall mojo::SimpleWatcher::*const &)(int,unsigned int [0x016DA882+114] base::internal::Invoker<base::internal::BindState<void (__thiscall mojo::SimpleWatcher::*)(int,unsigned int),base::WeakPtr<mojo::SimpleWatcher>,int,unsigned int>,void __cdecl(void)>::Run [0x016DC924+36] base::Callback<void __cdecl(void),0,0>::Run [0x01079A25+53] base::debug::TaskAnnotator::RunTask [0x010E449C+476] base::MessageLoop::RunTask [0x0115FDC2+626] base::MessageLoop::DeferOrRunPendingTask [0x0115E212+50] base::MessageLoop::DoWork [0x0115E852+242] base::MessagePumpForIO::DoRunLoop [0x01166661+33] base::MessagePumpWin::Run [0x0116774B+123] base::MessageLoop::Run [0x0115FA4F+191] base::RunLoop::Run [0x0121E0EA+186] base::Thread::Run [0x012BA311+273] content::BrowserThreadImpl::IOThreadRun [0x1152B420+32] content::BrowserThreadImpl::Run [0x1152CF39+377] base::Thread::ThreadMain [0x012BB45F+863] base::PlatformThread::Sleep [0x0129605C+380] BaseThreadInitThunk [0x74F3338A+18] RtlInitializeExceptionChain [0x77039902+99] RtlInitializeExceptionChain [0x770398D5+54]
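A minimal, self-contained C++ sketch of that suspected sequence (simplified, hypothetical stand-ins; not the real Chromium classes): the browser side has already reset thread_id_ to the invalid sentinel because it considers the worker stopped, yet a late SkipWaiting IPC still triggers a SendMessage attempt, and the DCHECK-style assert fires before the graceful stopped/stopping early-return is ever reached.

// Stand-alone sketch of the suspected race; FakeEmbeddedWorkerInstance is a
// hypothetical stand-in, not content::EmbeddedWorkerInstance itself.
#include <cassert>
#include <iostream>

constexpr int kInvalidEmbeddedWorkerThreadId = -1;

enum class EmbeddedWorkerStatus { STOPPED, STARTING, RUNNING, STOPPING };
enum ServiceWorkerStatusCode { SERVICE_WORKER_OK, SERVICE_WORKER_ERROR_IPC_FAILED };

class FakeEmbeddedWorkerInstance {
 public:
  // Mirrors the ordering quoted later in this issue: the thread-id check runs
  // before the status check, so a debug build aborts even though the very
  // next lines would have failed gracefully.
  ServiceWorkerStatusCode SendMessage() {
    assert(kInvalidEmbeddedWorkerThreadId != thread_id_);  // DCHECK_NE in dbg
    if (status_ != EmbeddedWorkerStatus::RUNNING &&
        status_ != EmbeddedWorkerStatus::STARTING) {
      return SERVICE_WORKER_ERROR_IPC_FAILED;
    }
    return SERVICE_WORKER_OK;  // would actually send the IPC here
  }

  void SimulateStopped() {
    status_ = EmbeddedWorkerStatus::STOPPED;
    thread_id_ = kInvalidEmbeddedWorkerThreadId;
  }

 private:
  EmbeddedWorkerStatus status_ = EmbeddedWorkerStatus::STOPPED;
  int thread_id_ = kInvalidEmbeddedWorkerThreadId;
};

int main() {
  FakeEmbeddedWorkerInstance worker;
  worker.SimulateStopped();
  // A SkipWaiting IPC arriving now makes the version object call back into
  // SendMessage(); in a debug-style build the assert aborts, matching the
  // FATAL "(-1 vs. -1)" above.
  worker.SendMessage();
  std::cout << "only reached when asserts are compiled out (release builds)\n";
}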
Comment 1 by falken@chromium.org, May 19 2017
There's at least one Linux crash for external/wpt/service-workers/service-worker/fetch-event-within-sw.https.html so it should be reproducible on Linux. https://storage.googleapis.com/chromium-layout-test-archives/WebKit_Linux_Trusty/26558/layout-test-results/results.html
May 26 2017
Jun 7 2017
I could reproduce the activation.https.html crash.

$ third_party/WebKit/Tools/Scripts/run-webkit-tests --target=Default --no-retry-failures --exit-after-n-failures=1 --iterations=100 external/wpt/service-workers/service-worker/activation.https.html

It crashed on the 84th iteration. The crash is:

[27970:27992:0607/175145.845468:2447228281053:FATAL:embedded_worker_instance.cc(524)] Check failed: kInvalidEmbeddedWorkerThreadId != thread_id_ (-1 vs. -1)
#0 0x7fc1074fab67 base::debug::StackTrace::StackTrace()
#1 0x7fc10752064d logging::LogMessage::~LogMessage()
#2 0x7fc108417210 content::EmbeddedWorkerInstance::SendMessage()
#3 0x7fc1084b5820 content::ServiceWorkerVersion::OnSkipWaiting()
#4 0x7fc1084b5631 _ZN3IPC8MessageTI37ServiceWorkerHostMsg_SkipWaiting_MetaSt5tupleIJiEEvE8DispatchIN7content20ServiceWorkerVersionES7_vMS7_FviEEEbPKNS_7MessageEPT_PT0_PT1_T2_
#5 0x7fc1084b2e35 content::ServiceWorkerVersion::OnMessageReceived()

The other flaky failures have different stacks, but it's odd that they all started failing at the same time, around May 15; it suggests a single CL caused it.

Here is the crash for unregister-then-register-new-script.https.html from the bots, which I haven't tried to repro yet.

[6352:5684:0606/213245.661:5672414:FATAL:ref_counted.h(95)] Check failed: CalledOnValidSequence().

Backtrace:
base::debug::StackTrace::StackTrace [0x015DDE17+55] base::debug::StackTrace::StackTrace [0x015DDAB1+17] logging::LogMessage::~LogMessage [0x016320BE+94] base::subtle::RefCountedBase::Release [0x015770BD+253] base::RefCounted<content::ServiceWorkerRegistration>::Release [0x104CD172+18] scoped_refptr<content::ServiceWorkerRegistration>::Release [0x1143B24E+14] scoped_refptr<content::ServiceWorkerRegistration>::~scoped_refptr<content::ServiceWorkerRegistration> [0x11439DAA+26] std::_Tuple_val<scoped_refptr<content::ServiceWorkerRegistration> >::~_Tuple_val<scoped_refptr<content::ServiceWorkerRegistration> > [0x1146B4BF+15] std::tuple<scoped_refptr<content::ServiceWorkerRegistration>,base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1>,scoped_refptr<content::ServiceWorkerVersion> >::~tuple<scoped_refptr<content::ServiceWorkerRegistration>,base::Callback<vo [0x121C9982+18] base::internal::BindState<void (__thiscall content::ServiceWorkerRegistration::*)(base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> const &,scoped_refptr<content::ServiceWorkerVersion>,enum content::ServiceWorkerStatusCode),scoped_re [0x121C9522+18] base::internal::BindState<void (__thiscall content::ServiceWorkerRegistration::*)(base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> const &,scoped_refptr<content::ServiceWorkerVersion>,enum content::ServiceWorkerStatusCode),scoped_re [0x121CA5AF+15] base::internal::BindState<void (__thiscall content::ServiceWorkerRegistration::*)(base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> const &,scoped_refptr<content::ServiceWorkerVersion>,enum content::ServiceWorkerStatusCode),scoped_re [0x121CBDA2+34] base::internal::BindStateBaseRefCountTraits::Destruct [0x0157D24F+15] base::RefCountedThreadSafe<base::internal::BindStateBase,base::internal::BindStateBaseRefCountTraits>::Release [0x01576F2F+31] scoped_refptr<base::internal::BindStateBase>::Release [0x0157D33B+11] scoped_refptr<base::internal::BindStateBase>::~scoped_refptr<base::internal::BindStateBase> [0x0157D04A+26] base::internal::CallbackBase<0>::~CallbackBase<0> [0x0157D01F+15] base::internal::CallbackBase<1>::~CallbackBase<1> [0x0157391F+15]
base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1>::~Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> [0x104C45C0+16] std::_Tuple_val<base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> >::~_Tuple_val<base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> > [0x1146B3BF+15] std::tuple<base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1>,content::ServiceWorkerDatabase::RegistrationData>::~tuple<base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1>,content::ServiceWorkerDatabase::Registrati [0x121F9CC5+21] std::tuple<base::WeakPtr<content::ServiceWorkerStorage>,base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1>,content::ServiceWorkerDatabase::RegistrationData>::~tuple<base::WeakPtr<content::ServiceWorkerStorage>,base::Callback<void __cd [0x121FA08D+29] base::internal::BindState<void (__thiscall content::ServiceWorkerStorage::*)(base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> const &,content::ServiceWorkerDatabase::RegistrationData const &,GURL const &,content::ServiceWorkerDataba [0x121F8992+18] base::internal::BindState<void (__thiscall content::ServiceWorkerStorage::*)(base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> const &,content::ServiceWorkerDatabase::RegistrationData const &,GURL const &,content::ServiceWorkerDataba [0x121FC50F+15] base::internal::BindState<void (__thiscall content::ServiceWorkerStorage::*)(base::Callback<void __cdecl(enum content::ServiceWorkerStatusCode),1,1> const &,content::ServiceWorkerDatabase::RegistrationData const &,GURL const &,content::ServiceWorkerDataba [0x121FF212+34] base::internal::BindStateBaseRefCountTraits::Destruct [0x0157D24F+15] base::RefCountedThreadSafe<base::internal::BindStateBase,base::internal::BindStateBaseRefCountTraits>::Release [0x01576F2F+31] scoped_refptr<base::internal::BindStateBase>::Release [0x0157D33B+11] scoped_refptr<base::internal::BindStateBase>::~scoped_refptr<base::internal::BindStateBase> [0x0157D04A+26] base::internal::CallbackBase<0>::~CallbackBase<0> [0x0157D01F+15] base::internal::CallbackBase<1>::~CallbackBase<1> [0x0157391F+15] base::Callback<void __cdecl(GURL const &,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<__int64,std::allocator<__int64> > const &,enum content::ServiceWorkerDatabase::Status),1,1>::~Callback<void __cdecl(GURL const &,content::Service [0x121F9160+16] std::_Tuple_val<base::Callback<void __cdecl(GURL const &,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<__int64,std::allocator<__int64> > const &,enum content::ServiceWorkerDatabase::Status),1,1> >::~_Tuple_val<base::Callback<void __ [0x121F95EF+15] std::tuple<base::Callback<void __cdecl(GURL const &,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<__int64,std::allocator<__int64> > const &,enum content::ServiceWorkerDatabase::Status),1,1> >::~tuple<base::Callback<void __cdecl(GURL [0x121F9C5F+15] std::tuple<std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content::ServiceWorkerDatabase::ResourceRecord> >,base::Callback<void __cdecl(GURL const &,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<__int64,st [0x121FA70A+26] std::tuple<content::ServiceWorkerDatabase::RegistrationData,std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content::ServiceWorkerDatabase::ResourceRecord> >,base::Callback<void __cdecl(GURL const 
&,content::ServiceWorkerDatabase [0x121F9AFA+26] std::tuple<scoped_refptr<base::SingleThreadTaskRunner>,content::ServiceWorkerDatabase::RegistrationData,std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content::ServiceWorkerDatabase::ResourceRecord> >,base::Callback<void __cdecl [0x121FA49D+29] std::tuple<content::ServiceWorkerDatabase *,scoped_refptr<base::SingleThreadTaskRunner>,content::ServiceWorkerDatabase::RegistrationData,std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content::ServiceWorkerDatabase::ResourceReco [0x121F994F+15] base::internal::BindState<void (__cdecl*)(content::ServiceWorkerDatabase *,scoped_refptr<base::SequencedTaskRunner>,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content: [0x121F84D2+18] base::internal::BindState<void (__cdecl*)(content::ServiceWorkerDatabase *,scoped_refptr<base::SequencedTaskRunner>,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content: [0x121FBE8F+15] base::internal::BindState<void (__cdecl*)(content::ServiceWorkerDatabase *,scoped_refptr<base::SequencedTaskRunner>,content::ServiceWorkerDatabase::RegistrationData const &,std::vector<content::ServiceWorkerDatabase::ResourceRecord,std::allocator<content: [0x121FEB92+34] base::internal::BindStateBaseRefCountTraits::Destruct [0x0157D24F+15] base::RefCountedThreadSafe<base::internal::BindStateBase,base::internal::BindStateBaseRefCountTraits>::Release [0x01576F2F+31] scoped_refptr<base::internal::BindStateBase>::Release [0x0157D33B+11] scoped_refptr<base::internal::BindStateBase>::~scoped_refptr<base::internal::BindStateBase> [0x0157D04A+26] base::internal::CallbackBase<0>::~CallbackBase<0> [0x0157D01F+15] base::Callback<void __cdecl(void),0,0>::~Callback<void __cdecl(void),0,0> [0x01579ECF+15] base::Callback<void __cdecl(void),0,0>::Run [0x0157A310+64] base::SequencedWorkerPool::Inner::ThreadLoop [0x017B4777+1207] base::SequencedWorkerPool::Worker::Run [0x017B3AC6+326] base::SimpleThread::ThreadMain [0x017C305F+127] base::PlatformThread::Sleep [0x017A202C+380] BaseThreadInitThunk [0x7669338A+18] RtlInitializeExceptionChain [0x775E9902+99] RtlInitializeExceptionChain [0x775E98D5+54]
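This second backtrace is a scoped_refptr<ServiceWorkerRegistration> being released during BindState/tuple teardown on a SequencedWorkerPool thread; base::RefCounted (unlike RefCountedThreadSafe) checks that this happens on the owning sequence. A rough, hypothetical stand-in for that failure mode (not the real base:: classes):

// Minimal sketch of a sequence-affine ref count being released on the wrong
// thread; SequenceAffineRefCounted is a made-up stand-in for base::RefCounted.
#include <cassert>
#include <thread>

class SequenceAffineRefCounted {
 public:
  SequenceAffineRefCounted() : owner_(std::this_thread::get_id()) {}
  void AddRef() { ++count_; }
  void Release() {
    // Stand-in for the CHECK(CalledOnValidSequence()) at ref_counted.h(95).
    assert(std::this_thread::get_id() == owner_ && "released on wrong sequence");
    if (--count_ == 0) delete this;
  }

 private:
  std::thread::id owner_;
  int count_ = 0;
};

int main() {
  // Stand-in for a ServiceWorkerRegistration created on the IO thread.
  auto* registration = new SequenceAffineRefCounted();
  registration->AddRef();  // reference held inside a bound callback's state

  // The callback never runs; it is simply destroyed on a worker-pool thread,
  // dropping the last reference there -- analogous to the BindState/tuple
  // destructor chain in the backtrace above.
  std::thread worker_pool_thread([registration] { registration->Release(); });
  worker_pool_thread.join();
  return 0;
}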
Jun 7 2017
fetch-event-within-sw.https.html is the same skipWaiting() crash.
Jul 28 2017
falken@ Ping? It is still crashing.
Aug 3 2017
?
Aug 10 2017
Sorry, no time.
Aug 10 2017
+xaiofeng: this may be interesting for you to look at, if you're looking at SkipWaiting message ordering lately.
Aug 10 2017
typo: xiaofeng
Aug 17 2017
Does the crash still happen? I can't reproduce it with:

$ third_party/WebKit/Tools/Scripts/run-webkit-tests --target=Default --no-retry-failures --exit-after-n-failures=1 --iterations=100 external/wpt/service-workers/service-worker/xxxx (the three test htmls)

Also, the current flakiness dashboard doesn't seem to show them flakily crashing. How can I see the previous flakiness dashboard results?
Aug 17 2017
They still look flaky according to the flakiness dashboard. For example, this is the build where external/wpt/service-workers/service-worker/activation.https.html failed:
https://build.chromium.org/p/chromium.fyi/builders/WebKit%20Linux%20-%20RandomOrder/builds/12206

It looks green, but if you open "layout_test_results" in "archive_webkit_tests_results", you can see the failure:
https://storage.googleapis.com/chromium-layout-test-archives/WebKit_Linux_-_RandomOrder/12206/layout-test-results/results.html

This is the trace from the results.html:

===
[14421:14507:0816/200244.435104:27268908129:FATAL:embedded_worker_instance.cc(567)] Check failed: kInvalidEmbeddedWorkerThreadId != thread_id_ (-1 vs. -1)
#0 0x000001a99197 base::debug::StackTrace::StackTrace()
#1 0x000001ab04f1 logging::LogMessage::~LogMessage()
#2 0x00000176c1f0 content::EmbeddedWorkerInstance::SendMessage()
#3 0x0000018081fb content::ServiceWorkerVersion::OnSkipWaiting()
#4 0x000001808011 _ZN3IPC8MessageTI37ServiceWorkerHostMsg_SkipWaiting_MetaNSt3__15tupleIJiEEEvE8DispatchIN7content20ServiceWorkerVersionES8_vMS8_FviEEEbPKNS_7MessageEPT_PT0_PT1_T2_
#5 0x000001805a01 content::ServiceWorkerVersion::OnMessageReceived()
#6 0x00000176e0d4 content::EmbeddedWorkerInstance::OnMessageReceived()
#7 0x000001771052 content::EmbeddedWorkerRegistry::OnMessageReceived()
#8 0x0000017a6a21 content::ServiceWorkerDispatcherHost::OnMessageReceived()
#9 0x0000019739dc content::BrowserMessageFilter::Internal::OnMessageReceived()
#10 0x000001ca466d IPC::MessageFilterRouter::TryFilters()
#11 0x000001c91dbd IPC::ChannelProxy::Context::TryFilters()
#12 0x000001c9209f IPC::ChannelProxy::Context::OnMessageReceived()
#13 0x000001c8ebb7 IPC::ChannelMojo::OnMessageReceived()
#14 0x000001c96a2f IPC::internal::MessagePipeReader::Receive()
#15 0x000001ca5414 IPC::mojom::ChannelStubDispatch::Accept()
#16 0x000001c7535f mojo::InterfaceEndpointClient::HandleValidatedMessage()
#17 0x000001c88596 mojo::FilterChain::Accept()
#18 0x000001c7660c mojo::InterfaceEndpointClient::HandleIncomingMessage()
#19 0x000001c9a2c1 IPC::(anonymous namespace)::ChannelAssociatedGroupController::Accept()
#20 0x000001c88596 mojo::FilterChain::Accept()
#21 0x000001c742b1 mojo::Connector::ReadSingleMessage()
#22 0x000001c74cd2 mojo::Connector::ReadAllAvailableMessages()
#23 0x000001c74b4c mojo::Connector::OnHandleReadyInternal()
#24 0x0000008a3350 mojo::SimpleWatcher::DiscardReadyState()
#25 0x000001c8b082 mojo::SimpleWatcher::OnHandleReady()
#26 0x000001c8b548 _ZN4base8internal7InvokerINS0_9BindStateIMN4mojo13SimpleWatcherEFvijRKNS3_18HandleSignalsStateEEJNS_7WeakPtrIS4_EEijS5_EEEFvvEE7RunImplIRKS9_RKNSt3__15tupleIJSB_ijS5_EEEJLm0ELm1ELm2ELm3EEEEvOT_OT0_NSI_16integer_sequenceImJXspT1_EEEE
#27 0x000001a998cb base::debug::TaskAnnotator::RunTask()
#28 0x000001ab6cfd base::MessageLoop::RunTask()
#29 0x000001ab735f base::MessageLoop::DoWork()
#30 0x000001aba2b9 base::MessagePumpLibevent::Run()
#31 0x000001ab679a base::MessageLoop::Run()
#32 0x000001ad83d7 base::RunLoop::Run()
#33 0x000001afef4c base::Thread::Run()
#34 0x00000142a166 content::BrowserThreadImpl::IOThreadRun()
#35 0x00000142a361 content::BrowserThreadImpl::Run()
#36 0x000001aff4f1 base::Thread::ThreadMain()
#37 0x000001af7d4c base::(anonymous namespace)::ThreadFunc()
#38 0x7f1e96b0e184 start_thread
#39 0x7f1e91ff0bed clone
Aug 17 2017
Thanks a lot shimazu. I only get the following content from the results.html:

httpd access log: access_log.txt
httpd error log: error_log.txt

Does it need permission?
Aug 17 2017
Hmm, me too. The log seems to have been swapped out. Please look for the latest failures on the flakiness dashboard.
Sep 13 2017
I tried again and could not repro on Linux this time.
The crash happens at:
DCHECK_NE(kInvalidEmbeddedWorkerThreadId, thread_id_);
if (status_ != EmbeddedWorkerStatus::RUNNING &&
    status_ != EmbeddedWorkerStatus::STARTING) {
  return SERVICE_WORKER_ERROR_IPC_FAILED;
}
It's weird that we require thread_id_ to be set (which implies the worker is running, or got to a certain point in starting), yet fail gracefully if it's stopped or stopping. The thread_id_ requirement is odd because thread_id_ can still be kInvalid even when the status is STARTING. It's probably better to just do:
if ((status_ != EmbeddedWorkerStatus::RUNNING &&
     status_ != EmbeddedWorkerStatus::STARTING) ||
    thread_id_ == kInvalidEmbeddedWorkerThreadId) {
  return SERVICE_WORKER_ERROR_IPC_FAILED;
}
But I'm going to deprioritize this, as mojofication should end this issue, and in production (where DCHECKs aren't fatal) I think we'll just fail gracefully with no crash.
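For comparison, a hedged sketch (again using hypothetical stand-in types rather than the real ones) of how that reordered guard would behave: with the combined status/thread_id_ check in place of the leading DCHECK, a late SkipWaiting against a stopped or not-yet-threaded worker just yields SERVICE_WORKER_ERROR_IPC_FAILED instead of a debug-build crash.

// Sketch of the proposed guard; FakeEmbeddedWorkerInstance is a hypothetical
// stand-in, not the real content::EmbeddedWorkerInstance.
#include <iostream>

constexpr int kInvalidEmbeddedWorkerThreadId = -1;

enum class EmbeddedWorkerStatus { STOPPED, STARTING, RUNNING, STOPPING };
enum ServiceWorkerStatusCode { SERVICE_WORKER_OK, SERVICE_WORKER_ERROR_IPC_FAILED };

class FakeEmbeddedWorkerInstance {
 public:
  ServiceWorkerStatusCode SendMessage() {
    // Proposed ordering: no DCHECK up front; a missing thread id is treated
    // the same as a not-running worker and fails gracefully.
    if ((status_ != EmbeddedWorkerStatus::RUNNING &&
         status_ != EmbeddedWorkerStatus::STARTING) ||
        thread_id_ == kInvalidEmbeddedWorkerThreadId) {
      return SERVICE_WORKER_ERROR_IPC_FAILED;
    }
    return SERVICE_WORKER_OK;  // would actually send the IPC here
  }

  EmbeddedWorkerStatus status_ = EmbeddedWorkerStatus::STOPPED;
  int thread_id_ = kInvalidEmbeddedWorkerThreadId;
};

int main() {
  FakeEmbeddedWorkerInstance worker;

  // Late SkipWaiting against a stopped worker: graceful error, no crash.
  std::cout << (worker.SendMessage() == SERVICE_WORKER_ERROR_IPC_FAILED
                    ? "IPC failed gracefully\n"
                    : "sent\n");

  // Worker is STARTING but the thread hasn't been assigned yet: also graceful.
  worker.status_ = EmbeddedWorkerStatus::STARTING;
  std::cout << (worker.SendMessage() == SERVICE_WORKER_ERROR_IPC_FAILED
                    ? "IPC failed gracefully\n"
                    : "sent\n");
}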
May 14 2018
Obsolete due to Mojo; SendMessage() no longer exists.