Starred by 19 users

Issue metadata

Status: Fixed
Closed: Dec 2016
EstimatedDays: ----
NextAction: ----
OS: Mac
Pri: 2
Type: Bug

Blocked on:
issue 648836


ServiceWorker's Link rel=serviceworker leads to botnet-like persistent JS worker

Reported by, Nov 4 2016

Issue description

UserAgent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36

Steps to reproduce the problem:
1. Visit
2. Close the tab
3. A few minutes later, open DevTools on any other page and see, under Application/Service Workers/Show All, a service worker in the Running state forever

What is the expected behavior?

What went wrong?
Normally there is only a small attack window (a few seconds to a few minutes) to exploit a user visiting a website. ServiceWorkers greatly changed that.

There are numerous ways, such as Background Sync or the Push API, that can wake up the worker and execute JS code in the worker's context whenever the attacker wants. It's somewhat similar to a classic botnet, though with lower privileges.

Let's start with the most dangerous and silent exploit I know of so far. There are other ways; we can talk about them too if you're interested.

We need to insert our picture on any https:// website, such as a blog or news platform. Every Chrome 54+ will fetch the image, parse the Link header, verify the Origin-Trial token (which is easy to get), and run our JS code. That's already bad on its own: how come an external image runs external JS code?

A minute later, just before the worker is terminated, it fetches itself, which recursively creates another worker that will run for another minute. It can grab a new job from the server, such as a cryptocurrency-mining operation, minor DDoS, CSRF via fetch with {credentials: true}, or other tasks.
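The delivery mechanism described above can be sketched as the HTTP response an attacker-controlled image server would send; the helper name below is hypothetical, not from the report:

```javascript
// Sketch (hypothetical helper) of the response headers the attacker's image
// server would send. Per the report, Chrome 54+ fetched the image, parsed the
// Link header, verified the Origin-Trial token, and registered the worker.
function attackImageHeaders(originTrialToken) {
  return {
    'Content-Type': 'image/png',
    // Link-header service worker registration, the feature under discussion
    'Link': '</sw.js>; rel="serviceworker"; scope="/"',
    'Origin-Trial': originTrialToken, // easy-to-obtain trial token
  };
}

// The registered sw.js then refetches this same URL just before its ~1 minute
// idle timeout, so the Link header is parsed again and a fresh worker spawns.
```

The recursion is entirely server-driven: every refetch of the image re-delivers the header, so the worker chain never ends.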

Did this work before? No 

Does this work in other browsers? N/A

Chrome version: 54.0.2840.71  Channel: n/a
OS Version: OS X 10.12.1
Flash Version: Shockwave Flash 23.0 r0

Hide this as a security issue, maybe? Not sure what your thoughts are.

Comment 1 Deleted

Blockedon: 648836
Status: Available (was: Unconfirmed)
I think this is a special case of  issue 648836 . See also  issue 647943 , and mek's comment at

Comment 3 by, Nov 7 2016

Yes, foreign fetch ping-pong & recursive Link registration are still working. Are you saying the fix will land soon?

If you allow a SW to FFetch another SW, which FFetches & wakes up another SW, I don't even know how you can fix it reliably. It's an infinite jar of events.

I'd offer something much easier to implement: check if any of ORIGIN's workers have been running for longer than 60 seconds after no ORIGIN windows, or windows with references like <img> to ORIGIN, are active, then stop them all. And set a maximum of a few minutes of work per day for background sync/notification events. Those also enable background botnets by design.
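The 60-second heuristic proposed here reduces to a simple predicate over the origin's last client activity; this is a rough sketch with hypothetical names, not an actual Chrome API:

```javascript
// Rough sketch of comment 3's proposal (names hypothetical): stop all of an
// origin's workers once no window, and no window holding a reference like
// <img> to the origin, has been active for more than 60 seconds.
function shouldStopOriginWorkers(lastClientActivityMs, nowMs, graceMs = 60_000) {
  return nowMs - lastClientActivityMs > graceMs;
}
```

As comment 9 notes below, the real policy is trickier, since legitimate background events (push, sync) must still run without any open window.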

 Issue 662731  has been merged into this issue.
Labels: -Type-Bug Security_Impact-Stable Security_Severity-Low Type-Bug-Security
I'm labelling this as a Security bug but leaving it public because of the various similar public discussions that falken@ linked to.

Comment 6 by, Nov 7 2016

Well, it's not public now; it's open only to SecurityTeam.

Also, one use case where this is most dangerous: scrypt calculations. While SHA mining is not profitable, scrypt/Argon2 make heavy use of memory, and a botnet of infinite service workers is very handy for that purpose.
Labels: -Type-Bug-Security Type-Bug
Actually making this public now...

Comment 9 by, Nov 8 2016

Yes, I agree that the current situation where service workers can keep themselves alive (via postMessage to same origin service workers, or via foreign fetch to cross origin service workers) is far from ideal. I felt okay launching this as an origin trial anyway since <img> + link header + foreign fetch didn't seem too much worse than <iframe> + postMessage, both of which can cause service workers to stick around indefinitely.

I agree that we'll indeed need some kind of condition to determine that a service worker is misbehaving (like your 60 seconds after all windows have been closed), but I think it's trickier than that. For one, other events (with the possible exception of message events, to prevent message ping-ponging from keeping workers alive) should still be able to execute in the background, so there will be situations where the worker is kept alive for at least a little while without any recent windows.

Currently the killing behavior in Chrome is pretty basic (well, it's slightly more complicated than this): every service worker has an X-minute timeout, and the timeout is reset whenever an event is dispatched to the worker. If the timeout expires, the worker is killed. I think with some tweaking of this timeout we might be able to resolve at least some of these issues:

- for events triggered by a SW client (i.e. fetch events, message events from windows) keep the current behavior
- for events triggered by the browser (background sync, push, ...) probably also keep the current behavior
- for events somehow triggered by another service worker (foreign fetch, messages from service workers) reset the timeout to the maximum of the current timeouts of the triggering and triggered service workers.

I think that should address most (if not all) of the issues around service workers keeping themselves alive longer than they should.
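The three bullet points above condense into a single deadline rule; the sketch below uses hypothetical names and numbers, and is not Chrome's actual implementation:

```javascript
// Sketch of comment 9's proposed timeout tweak. Events from clients or the
// browser reset the timeout as before; events from another service worker can
// only extend the target's deadline up to the triggering worker's own deadline.
function nextDeadline(source, nowMs, target, trigger, timeoutMs = 5 * 60_000) {
  if (source === 'client' || source === 'browser') {
    return nowMs + timeoutMs; // full reset: current behavior
  }
  // source === 'service-worker' (foreign fetch, SW-to-SW message):
  // no fresh extension, so ping-ponging cannot push the deadline out forever
  return Math.max(target.deadlineMs, trigger.deadlineMs);
}
```

Under this rule, a pair of workers messaging each other back and forth can never exceed the later of their two original deadlines.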

Comment 10 by, Nov 8 2016

Oh wait, that's exactly what Jake proposed in the GitHub bug for the postMessage case.

Comment 11 by, Nov 16 2016

Any milestones for this class of vulns? This is still a security bug and can be abused right now by anyone.
I'm targeting M58... maybe 57.

Comment 13 by, Nov 17 2016

Status: Started (was: Available)
I should be able to get this fixed for M57 at least.

Comment 14 by, Nov 18 2016

Summary: ServiceWorker's Link rel=serviceworker leads to botnet-like persistent JS worker (was: ServiceWorker's Foreign Fetch leads to botnet-like persistent JS worker)
Actually, I didn't fully read/understand the original described exploit. I assumed from the title that foreign fetch was somehow involved, but the demonstrated persistent behavior actually doesn't have anything to do with foreign fetch at all. The demo only relies on installing a service worker via the Link header.

As such, while the proposed mitigation of limiting how long a foreign fetch and/or message event can run when triggered by another service worker will fix foreign fetch ping-ponging as a way to keep a worker alive, it doesn't actually do anything to address recursive Link registration.

Not sure what the best way to address the recursive registration thing is. Just disabling processing of Link headers for fetches initiated by a service worker would be too limiting. Disabling processing of Link headers for same origin fetches initiated by a service worker wouldn't actually stop this exploit (it would just require mutual recursion with two origins).

Maybe disabling processing of Link headers for fetches initiated by a service worker where that service worker isn't controlling any clients would be a sensible approach. That way we'd still support most interesting use cases, while also limiting the amount of damage this attack can do.
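The controllee-based guard suggested here can be sketched as a single check; names below are hypothetical, not Chrome internals:

```javascript
// Sketch (hypothetical names) of the check from comment 14: Link headers on
// SW-initiated fetches are only processed if the initiating worker currently
// controls at least one client.
function shouldProcessLinkHeader(initiator) {
  if (initiator.type !== 'service-worker') {
    return true; // document/window-initiated fetches: behavior unchanged
  }
  return initiator.controlledClients > 0;
}
```

This breaks the recursion in the original report, since a background worker with no open tabs can no longer register successors via its own fetches.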

Comment 15 by, Nov 18 2016

mek, that's true; this behavior doesn't exactly depend on foreign fetch, but it does require an Origin Trial token issued for foreign fetch, hence the title. I raised the question in 648836 of whether this is limited by the 0.03% rule of origin trials, because if not, it is a more serious issue that could, in theory, be exploited unnoticed right now.

Speaking of the fix, I still believe the rate limiting shouldn't be tied to events at all, and should in my opinion be more general, like "all workers from origin X without a client cannot run for longer than 120 seconds per day". Otherwise more bypass vectors will be found in the future.

Plus, I think Link-based registration is scary on its own. These days many blogs and comment systems allow 3rd-party images, and each can lead to a potential JS exploit (including simple CSRF). Maybe ignore Link registration for images and ServiceWorker-initiated requests? That would fix the original recursive registration issue.
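The "120 seconds per day" alternative from the previous paragraph amounts to a per-origin runtime budget; a minimal sketch, with hypothetical names and the limit taken from the comment:

```javascript
// Minimal sketch of a per-origin daily budget for clientless worker runtime,
// as proposed in comment 15 (class and method names are hypothetical).
class ClientlessRuntimeBudget {
  constructor(limitMsPerDay = 120_000) {
    this.limitMs = limitMsPerDay;
    this.usedMs = 0; // a real implementation would reset this daily
  }
  charge(ms) { this.usedMs += ms; }
  exhausted() { return this.usedMs >= this.limitMs; }
}
```

The appeal of a budget is that it is bypass-agnostic: it caps total background runtime regardless of which event type kept the worker alive.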

Comment 16 by, Nov 19 2016

Yeah, in hindsight it probably would have been cleaner to completely separate the Link rel=serviceworker feature as an origin trial separate from the foreign fetch origin trial.

For general background rate limiting, something like the Budget API ( might be part of the solution (which I believe should help somewhat with clearer restrictions on how much background work a service worker can do in sync/push events). I'm not sure it would be a good fit for foreign fetch though, as it's much less clear how a foreign fetch service worker would acquire budget.

As an initial fix, at least making sure that foreign fetch (and postMessage) can't be used to increase the overall time service workers can do work (by limiting the timeout to whatever the existing timeout was) seems pretty reasonable. I do agree that we'll probably want more sophisticated heuristics as well at some point, to try to detect misbehaving service workers. I'm not convinced that a simple X seconds per Y hours limit is going to be good enough, though.

Link-based registration is definitely scary, yes. I think ignoring Link registration for service-worker-initiated requests where the service worker doesn't have any controllees, as I suggested, will be enough to fix the recursive registration issue at least. I'd rather not go further though, as I believe there are nice use cases for allowing Link registration on images, style sheets, fonts, etc. Maybe web sites should be able to opt out of Link registration for their subresources though (for things like user-generated content). I'd rather not go all the way to requiring opt-in, as that would severely limit the usefulness of the feature (but it will be interesting to see whether people trying to use foreign fetch think it would still be useful if Link registration were opt-in).

Comment 17 by, Nov 19 2016

How about restricting link registration from service workers that were themselves registered via link?
Should we disable the origin trials to mitigate for now?

Comment 19 by, Nov 20 2016

>How about restricting link registration from service workers that were themselves registered via link?

Good idea, but it would require storing a "register_method" property.

Yes I think origin trials should be disabled until fixed. 
> Maybe disabling processing of Link headers for fetches initiated by a service worker where that service worker isn't controlling any clients would be a sensible approach.

I'm wondering about the predictability impact of this approach. Would developers be surprised that registration succeeds/fails depending on whether the requesting worker has a client? Or would you consider registration without a client never a valid use case?

Comment 21 by, Nov 21 2016

Not sure about disabling the origin trial, I'll talk to some security team people to see what they think. Also disabling the origin trial of course would do nothing for service workers being able to keep themselves alive (via postMessage to themselves). But possibly the recursive spawning of new workers is indeed bad enough that this should be considered.

I think disabling Link header processing for fetches made by service workers without controllees isn't that surprising. What I was thinking is that possibly we'd completely disable this for fetches made from service workers. But it would be surprising if adding a controlling service worker (after which all fetches could suddenly go through it) changed whether headers are processed or not.

Guarding this on whether the service worker making the fetch was itself registered via the Link header or not would be a lot more surprising/confusing I think, especially since that state would be persistent (I suppose we could clear the "registered-via-header" flag if a javascript register() call is made that matches an existing registration; but that still seems kind of magic/confusing to me).
Project Member

Comment 22 by, Nov 21 2016

The following revision refers to this bug:

commit e8313bb499c439b0a5091225faf97b06f4cf1182
Author: mek <>
Date: Mon Nov 21 06:44:17 2016

Limit Link header based SW installations for fetches made by SW.

To prevent a service worker from spawning new service workers in the
background, only process link headers on requests made by service
workers if the service worker is controlling at least one client.

BUG= 662443 

Cr-Commit-Position: refs/heads/master@{#433489}


Comment 23 by, Nov 22 2016

Labels: Merge-Request-56

Comment 24 by, Nov 22 2016

Labels: -Merge-Request-56 Merge-Approved-56 Hotlist-Merge-Approved
Your change meets the bar and is auto-approved for M56 (branch: 2924)
Project Member

Comment 25 by, Nov 22 2016

Labels: -merge-approved-56 merge-merged-2924
The following revision refers to this bug:

commit f89f2f4beec7dc806420025f9b76d2c731f918b6
Author: Marijn Kruisselbrink <>
Date: Tue Nov 22 19:19:07 2016

Limit Link header based SW installations for fetches made by SW.

To prevent a service worker from spawning new service workers in the
background, only process link headers on requests made by service
workers if the service worker is controlling at least one client.

BUG= 662443 

Cr-Commit-Position: refs/heads/master@{#433489}
(cherry picked from commit e8313bb499c439b0a5091225faf97b06f4cf1182)

Review URL: .

Cr-Commit-Position: refs/branch-heads/2924@{#63}
Cr-Branched-From: 3a87aecc31cd1ffe751dd72c04e5a96a1fc8108a-refs/heads/master@{#433059}


Labels: TE-Verified-M56 TE-Verified-56.0.2924.10
Verified the issue on Mac 10.11.6 using the latest Chrome Dev M56 (56.0.2924.10) by following the steps in the original comment. Observed that the service worker stopped in the Show All screen. Please find the screenshot attached for reference. Hence adding the TE-Verified label.

Comment 27 by, Nov 29 2016

Btw, the demo was broken for a while (someone submitted a job with a syntax error). It's fixed now.

Comment 28 by, Dec 1 2016

Status: Fixed (was: Started)
This should be fixed on M56 anyway. There are still ways to keep a service worker alive indefinitely, but those are tracked in  issue 648836  and  issue 647943 , so marking this one as fixed.

Comment 29 by, Dec 1 2016

Labels: M-56
