
Issue 678798

Starred by 12 users

Issue metadata

Status: Available
Owner: ----
Cc:
Components:
EstimatedDays: ----
NextAction: 2019-03-01
OS: Linux, Android, Windows, Chrome, Mac, Fuchsia
Pri: 2
Type: Bug

Blocked on:
issue 852658




Support long running service workers with FetchEvent.respondWith() while controlled window is open

Reported by reeld...@gmail.com, Jan 5 2017

Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36

Steps to reproduce the problem:
1. Install a service worker that makes many background requests to fetch data that should be delivered as a blob to the client.

2. The client sends a request, which the service worker intercepts and starts responding to with data that is downloaded by the browser.

About 5 minutes into the download (after the request was sent by the client), we get the following error: "Failed - Network Error". Through debugging, it appears that the service worker is killed and restarted, which causes the socket to disconnect and the download to fail. (A minimal sketch of this pattern appears at the end of this description.)

Refer to this thread for more information on how to reproduce: 
https://github.com/jimmywarting/StreamSaver.js/issues/39

What is the expected behavior?
I would expect the service worker to be kept alive with FetchEvent.respondWith() or ExtendableEvent.waitUntil() as long as the controlled page is open. 

What went wrong?
The blob data being streamed to the user (as a file download) is lost.  

Did this work before? No 

Chrome version: 55.0.2883.95  Channel: stable
OS Version: OS X 10.11.4
Flash Version: Shockwave Flash 24.0 r0

This problem goes away when the developer console is open (since service workers are not automatically killed then), but that is not a sufficient workaround.
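
For reference, a minimal sketch of the pattern described in the steps above (hypothetical code in the style of StreamSaver.js; the URL, file name, and message format are made up):

// sw.js -- the page streams chunks to the service worker over a MessagePort,
// and the service worker turns them into a file download via respondWith().
const downloads = new Map(); // synthetic download URL -> MessagePort feeding it

self.addEventListener('message', (event) => {
  // The page registers a pending download and hands over one end of a MessageChannel.
  downloads.set(event.data.url, event.ports[0]);
});

self.addEventListener('fetch', (event) => {
  const port = downloads.get(event.request.url);
  if (!port) return; // not one of our synthetic download URLs
  downloads.delete(event.request.url);

  const stream = new ReadableStream({
    start(controller) {
      port.onmessage = ({ data }) => {
        if (data.done) controller.close();    // the page signals end of data
        else controller.enqueue(data.chunk);  // Uint8Array chunk from the page
      };
    }
  });

  // This respondWith() has to stay live for the entire download -- which is
  // exactly what the 5-minute service worker lifetime limit breaks.
  event.respondWith(new Response(stream, {
    headers: { 'Content-Disposition': 'attachment; filename="data.bin"' }
  }));
});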
 

Comment 1 by ajha@chromium.org, Jan 6 2017

Components: -Blink Blink>ServiceWorker
Labels: TE-NeedsTriageHelp
Cc: jakearchibald@chromium.org
NextAction: 2017-02-01
Status: Available (was: Unconfirmed)
Yep, sorry this is not supported now. We have a hard 5 minute limit on keeping service workers alive. This is intended to defend against wasting battery and CPU, especially with respect to badly or maliciously written service workers.

I don't have a plan for what to do here. Would it be possible to download the data in chunks as a workaround?

Jake: this might be an interesting problem to think about. I'm also curious what Firefox and Edge implementations would do.
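
One way to approximate the chunked-download workaround suggested above, done from the page rather than the service worker so no single FetchEvent has to stay open (hypothetical sketch; assumes the server supports HTTP Range requests and exposes Content-Length):

// Download a large resource in Range-request chunks from the page itself;
// each request completes well within the service worker's lifetime limit.
async function downloadInChunks(url, chunkSize = 8 * 1024 * 1024) {
  const head = await fetch(url, { method: 'HEAD' });
  const total = Number(head.headers.get('Content-Length'));
  const parts = [];
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize, total) - 1;
    const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
    parts.push(await res.blob());
  }
  return new Blob(parts); // assemble the full payload in memory on the page
}

The obvious downside is that the whole payload ends up buffered on the page, which is exactly what the streaming approach was trying to avoid.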

Comment 3 by reeld...@gmail.com, Jan 6 2017

Firefox has an open bug for this exact issue. They don't support ReadableStream yet, so it is a lower priority for them, but according to the last comment on the bug, once streams are implemented they intend to keep the service worker alive for longer periods while it is holding a FetchEvent.respondWith() and the FetchEvent's window is still open. If the FetchEvent's window is closed, the respondWith() keep-alive will be revoked.

https://bugzilla.mozilla.org/show_bug.cgi?id=1302715


Comment 4 by bke...@mozilla.com, Jan 6 2017

I think ideally the lifetime here would be extended as long as the "outer" fetch being intercepted is still open.  So if the Client is closed it would normally get canceled.  If no data is received it would ideally be timed out for inactivity.  Either of these should remove the lifetime extension.

This might be a bit complicated for us to implement at the moment, but I think that is where we would like to go long term.

I guess we would also have to think about things like beacon which don't cancel on Client close.  We don't want those to be used for arbitrary lifetime extension of the SW.

Comment 5 by falken@chromium.org, Jan 24 2017

See also https://github.com/jakearchibald/background-fetch, which appears designed to meet this type of use case.
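
For context, the Background Fetch API that later grew out of that proposal looks roughly like this (sketch only; the API shape changed after this comment was written, and the id, request URL, and options shown are placeholders):

// Hand the download to the browser so it outlives both the page and the
// service worker; the SW is later woken up with backgroundfetchsuccess /
// backgroundfetchfail events to process the result.
async function startBackgroundDownload() {
  const registration = await navigator.serviceWorker.ready;
  return registration.backgroundFetch.fetch(
    'large-download',                   // developer-chosen id
    ['/videos/large-file.webm'],        // request(s) to fetch in the background
    {
      title: 'Large file download',
      downloadTotal: 500 * 1024 * 1024  // hypothetical size, used for progress UI
    }
  );
}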

Comment 6 by reeld...@gmail.com, Jan 24 2017

I would add that it may be worth distinguishing between proposals that keep the background thread alive after the user browses away from the origin page (or closes the browser) and proposals that keep it alive while the user still has the origin page loaded.

The issue raised here is that even if you keep the origin page loaded, the worker still gets killed.

Comment 7 by ji...@warting.se, Feb 19 2017

IMO, background-fetch just adds more complexity and gives you yet another way of solving things. I don't really see many advantages to it; most things can already be handled with the techniques we have today.

The only issue I have with service workers is the hard lifetime limit. You might see it as a defense mechanism, but I see it as a blocker.
Some have a valid reason for keeping the service worker running for a longer time.
StreamSaver.js is one of them; another would be WebTorrent, if WebRTC were supported in service workers...

I would rather see the timeout removed and replaced with some defense mechanism that blocks scripts that drain a lot of CPU/battery/memory/bandwidth.

Comment 8 by falken@chromium.org, Feb 20 2017

NextAction: 2017-01-04

Comment 9 by falken@chromium.org, Feb 21 2017

NextAction: 2017-04-01
NextAction: 2017-10-01
The NextAction date has arrived: 2017-10-01

Labels: -OS-Mac -TE-NeedsTriageHelp

NextAction: 2018-03-01
The NextAction date has arrived: 2018-03-01

NextAction: 2019-03-01

Chrome Remote Desktop is interested in using respondWith() for large data transfers. Is there a path forward for this?
Cc: rkjnsn@chromium.org

Blockedon: 852658
Labels: OS-Android OS-Chrome OS-Fuchsia OS-Linux OS-Mac OS-Windows
rkjnsn@ could you comment more about priorities/plans here? (maybe internally if you want)

Re-reading this, I agree with the feature request, but there may be complexity here. This could be a medium-sized project for a dedicated contributor.

I think I agree with the end-goal vision expressed in comment #4. It feels like it should be possible to mint a keep-alive token when the service worker starts intercepting a resource request and revoke it when the outer resource request completes.

This is probably blocked on refactoring timeouts at issue 852658.
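
Purely as an illustration of the keep-alive-token idea above (conceptual sketch, not Chromium code; all names are invented): the idle-termination timer only runs while no intercepted outer request is still alive.

class ServiceWorkerKeepAlive {
  constructor(idleTimeoutMs, terminate) {
    this.tokens = new Set();         // one token per in-flight intercepted request
    this.idleTimeoutMs = idleTimeoutMs;
    this.terminate = terminate;
    this.timer = null;
    this.armIdleTimer();
  }
  mintToken(requestId) {             // respondWith() starts streaming a response
    this.tokens.add(requestId);
    clearTimeout(this.timer);        // lifetime extended while the outer request lives
  }
  revokeToken(requestId) {           // outer request completes, is cancelled
    this.tokens.delete(requestId);   // (client closed), or times out for inactivity
    if (this.tokens.size === 0) this.armIdleTimer();
  }
  armIdleTimer() {
    this.timer = setTimeout(this.terminate, this.idleTimeoutMs);
  }
}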
