
Issue metadata

Status: Assigned
EstimatedDays: ----
NextAction: ----
OS: Linux, Windows, Chrome, Mac
Pri: 1
Type: Bug


www subdomain is removed even when it isn't the leftmost subdomain

Project Member Reported by, Sep 7

Issue description

1. Go to
2. Observe that Chrome 69 displays it as

I find this surprising -- I assumed we would only drop "www" from the display when it was the root subdomain. Is this intentional, or is this a parsing/regex error?

I can't think of a way to exploit this as-is but if it's a parsing/regex error there could be a bigger bug there, so marking restricted out of an abundance of caution.
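For illustration, here is a minimal, hypothetical sketch (in Python, not Chromium's actual C++ omnibox code) of the difference between eliding trivial labels anywhere in the hostname versus only from the left; the reported bug corresponds to the first behavior. The set of "trivial" labels and the two-label floor are assumptions for the sketch, not the real implementation:

```python
# Toy sketch contrasting two elision strategies for "trivial" labels.
TRIVIAL = {"www", "m"}

def elide_anywhere(host: str) -> str:
    """Buggy variant: drops trivial labels wherever they appear."""
    labels = host.split(".")
    return ".".join(l for l in labels if l not in TRIVIAL)

def elide_leading_only(host: str) -> str:
    """Fixed variant: strips trivial labels only from the left.

    The last two labels are never elided here, as a crude stand-in
    for protecting the eTLD+1.
    """
    labels = host.split(".")
    i = 0
    while i < len(labels) - 2 and labels[i] in TRIVIAL:
        i += 1
    return ".".join(labels[i:])

# The reported bug: a non-leftmost "www" disappears.
print(elide_anywhere("secure.www.example.com"))      # secure.example.com
print(elide_leading_only("secure.www.example.com"))  # secure.www.example.com
```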
+1 to every point in pkasting's opinion in comment 10 (apart from point (5) about tumblr, which I don't have the context on).  For point (6), isn't this controlled by a field trial?  We might be able to turn it off without an M69 respin if so.
Comment 11: I do not think we should be eliding from the middle (given the URL confusion/spoof risk), so I'm strongly in support of your point (2).
Issue 881948 has been merged into this issue.
Agree with #10. I think this is potentially bad enough to warrant merging to M69, but I'll let actual security experts make the call here. Certainly should be merged to M70 which has just branched.

(4) is very insightful: if we use the PSL then we won't drop the "www" in "" but we will drop the "www" in "". This behaviour seems correct to me.

#11: I agree that doing JUST (4) and nothing else will fix both of the spoofing issues identified above. But (2) also seems like a good candidate to merge also just from the confusion of eliding arbitrary domain labels.
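As a rough sketch of point (4): the idea is to compute the registrable domain (eTLD+1) from the PSL and never elide labels inside it. The tiny hard-coded PSL below is a stand-in for the real effective_tld_names.dat, with "tumblr.com" playing the role of a private registry whose subdomains are user-registrable; everything here is illustrative, not Chromium's code:

```python
# Sketch of point (4), assuming a tiny hard-coded PSL.
TOY_PSL = {"com", "co.uk", "tumblr.com"}
TRIVIAL = {"www", "m"}

def registrable_domain(host: str) -> str:
    """Longest matching PSL suffix plus one more label (the eTLD+1)."""
    labels = host.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in TOY_PSL:
            return ".".join(labels[max(i - 1, 0):])
    return host

def elide_trivial(host: str) -> str:
    """Strip leading trivial labels, but never touch the eTLD+1."""
    labels = host.split(".")
    protected = len(registrable_domain(host).split("."))
    i = 0
    while i < len(labels) - protected and labels[i] in TRIVIAL:
        i += 1
    return ".".join(labels[i:])

print(elide_trivial("www.example.com"))  # example.com
print(elide_trivial("m.tumblr.com"))     # m.tumblr.com (inside a private registry)
```

Note that the PSL is only consulted to decide where eliding must stop, so an adversarial PSL addition can only cause less elision, not more.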
We should merge to 69 if possible. Do we have an owner for this work? tommycli?
Is this finched? We may also want to consider finching this off until we get a fix in to 69. I think the tumblr case is pretty bad.
Does anyone know what the cutoff time is for merging and being included in the respin?

tommycli: what's your availability to make any changes on Monday?
I think we should do all of (2), (4), and (5). If we can get it into the 69 respin, let's do that. I don't think wikipedia is worth the risk.

This is the tumblr reference: With the Chrome 69 update, both on mobile and desktop, it looks like Chrome is auto-assuming that the `m` subdomain (i.e. ``) is the mobile version of the domain and the address bar only shows the TLD. In this case, it's a domain that's fully controlled by a user and with the address-bar masking, they could easily masquerade as Tumblr =/

(Justin, do you remember where the conversation about the wikipedia case was?)
felt: Yes, this feature is Finch-controlled so if we can't make the respin that's an option.
+awhalley@ (Security TPM)

Pls apply appropriate OS and milestone labels.
Labels: ReleaseBlock-Stable M-69
+govind for the question in comment 19.
Labels: M-71 M-70 Target-69
Re #19:
We're planning to cut M69 stable RC on Monday @ 1:00 PM PT for release on Tuesday if all goes well.
Labels: Target-70 Target-71 OS-Chrome OS-Linux OS-Mac OS-Windows
Assuming this bug applies to Desktop + Chrome OS. Pls add any missing OS labels.
Yes, the OS labels are correct.
Issue 882154 has been merged into this issue.
Issue 882240 has been merged into this issue.
I have a CL that implements (2) and (4):

tommycli: can you try doing (5) tomorrow morning? It looks like a trivial change but it looks like there may be some process to updating effective_tld_names.dat.
+cc: pam and rsleevi (OWNERS of effective_tld_names.dat as a heads up that we'd like to add
If you would like changes to the PSL, then the domain owner needs to request them - . We absolutely should not modify it without the domain owner's explicit request. If this feature is using the PSL, then that's probably a bug.
Using the PSL for the functionality concerns (not eliding a meaningful label in the eTLD+1 for sites that essentially tell us where the meaningful label boundary should be) seems reasonable to me. It's definitely better than telling sites that they retroactively shouldn't have allowed registering "www" or "m" subdomains.

What are the security concerns of relying on the PSL? Do we know why hasn't included themselves in the PSL (not sure we could get this context quickly though)?
rsleevi: Do you have any insight into why is not currently on the PSL?
Because that would prevent setting cookies for and prevent single-sign-on through the Tumblr properties. This would functionally break that (among who knows what else)

Put differently: we shouldn't be adding new dependencies on the PSL, and we shouldn't assume any security boundaries on the PSL. The PSL's contents for third-party sites are best-effort, and related to cookies' broken model. As we work to correct the issues of cookies, the goal would be to remove the PSL. So if there are security features that assume we can accurately determine eTLD+1, we cannot support that today.
Eliding domains from the left using the PSL seems in-line with other browsers' use of the PSL though: Firefox and IE both use it for highlighting the eTLD+1 (leaving true subdomain labels un-highlighted).

Having a way to track eTLD+1 seems fundamentally useful for URL display efforts (and is included in our current display guidelines). Perhaps the PSL is not sufficiently reliable to use it for eliding (vs. highlighting) though.
Yeah, I think that's a good summary. It's "reliable enough" for highlighting - but noting that the highlighting can be bypassed (e.g. adversarial addition to the PSL), but not reliable enough for eliding/removing information.
It's my understanding that (2) alone (from #10) will address the security concern here. The attack we're concerned with is the one described by mgiuca in #2 and expanded upon in #4. If we only elide leading subdomains that attack is not possible. This change (2) is not related to use of the PSL.

I'd still like to proceed with (4) as well. The motivation for this change is not security but instead to avoid confusion in the case where a private registry allows users to own the "www" or "m" subdomains.

Regarding the PSL not being reliable enough for eliding/removing information, note that we only use the PSL to choose cases where we *don't* elide. So adversarial addition to the list isn't a concern, I think.
rsleevi@, carlosil@, and I discussed this offline some:

The concern is that relying on the PSL to enumerate private registries will always have false negatives (i.e., we will elide subdomains we should not, like the tumblr case). There are sites that _cannot_ put themselves on the PSL, and sites that simply haven't yet put themselves on the PSL.

While I agree that we don't need to worry about the adversarial addition case here, the question is whether the PSL sufficiently covers the user-registrable cases. The conflation of the PSL with cookie policies makes it feel like we'd need a separate exception list, and we'd still be fighting a losing battle to keep track of all the exceptions.

Prior efforts to add explicit notation for what the eTLD+1 is for a domain to the DNS or other operator controllable locations haven't succeeded.

One counterpoint I offered was that sites like that offer full custom theming to their subdomains already expose their main-origin cookies to subdomain scripts, so a subdomain would likely not need to resort to phishing the main domain at all. But that is not true of all sites that offer subdomain registration (they may strictly limit content to prevent arbitrary script access to users).

We feel that we would need an affirmative source of "you can elide subdomains" data to avoid the cases that have been raised, rather than an "opt out" (which is what the PSL is being proposed to be here).
cthomp: Thanks for the thorough explanation, that's helpful.

The concern you raise about use of the PSL here is about potential user confusion, though, correct? I believe there's not a security concern as long as we only elide from the leading/left/leaf end. Since is a false negative, the case will continue to be a problem. But any spoofing there (or on a similar site with script access limitations) is possible today. If I visit and it tries to spoof a Tumblr auth page, I'm not going to be protected by the browser UI displaying the "m." because my expectation is that "" is owned by
I'm not sure that an active tumblr user would fall for it, if they checked the URL (that second condition is definitely a broader issue with URLs), as they are more intimately used to seeing the individual sub-site identity in the leading subdomain label. But at best we could understand this statistically through user research (rather than it being a truism either way).

But yes, the concern here is this feature enabling user confusion and potential phishing attacks on sites that have registrable subdomains but for whatever reason are not on the PSL. I think it may be dangerous to break origin display without a robust alternative.

FYI, we're reaching out to Tumblr offline to ask if they'll add themselves to the PSL (and if not, why not). 

I agree with c#40: proceeding with (4) seems strictly better than not proceeding. 
The other alternative is to Finch off the elision launch entirely if we think the false-positive concerns on trivial subdomains are too high.
I'd like to push back on comment #44 - As captured in our meeting, the PSL is not intended for this use case, and the assumption that sites should be on the PSL if they don't want this behaviour (i.e. opt-out) seems like a change that is inconsistent with our Blink Process on shipping changes ( ). That is, much in the same way that CORS was introduced as an opt-in, rather than as an opt-out, I think we may be underestimating the risk of confusion by relying on an opt-out mechanism. From the PSL side, we've seen this constantly.

As it relates to PSL updates, they are presently treated as best effort on a volunteer basis, and merged prior to Beta/Stable promotions if there have been meaningful changes. As such, we're very clear in communicating that sites should expect 3-6 months before the PSL changes are distributed in a number of clients, and up to 18 months for removals. A feature that encourages rapid addition (to mitigate risk or to add features) is generally detrimental to both sites (who make mistakes in their additions) and consumers of the PSL (who get increased requests to update their copy).

I'd like to encourage Finch-ing off and revisiting some of the assumptions, to see if there is something that can be safely launched in the future.
I think the folks talking here have two different scopes of remedy in mind:
(1) Keep trivial subdomain eliding but try to patch it until it is "good enough"
(2) Remove trivial subdomain eliding until such time as we can make it "good enough"

(Obviously, depending on your definition of "good enough", you may lean more toward (1) or (2) by default.)

If you are assuming (1), that we want to keep this but try to bandaid it, then using the PSL here is strictly better than not using it.  Comment 41 is correct that we will still have false negatives that cause us to elide in cases where we arguably shouldn't.  But the alternative is to _always_ elide in those cases, which is much worse.  In camp (1), the question is not, as comment 41 puts it, "whether the PSL sufficiently covers the user-registrable cases", but rather whether using the PSL covers any of those cases at all, without adding false positives.  The answer is that it does, so we should use it, while still remaining open to other signals/heuristics if people think of them.  I think this is what comment 40 is arguing.

However, if you're arguing for (2), the concern is that using the PSL still doesn't get us to the point where we never elide a case where the full hostname was meaningfully different than the elided one (for a suitable definition of "meaningfully").  Therefore, we need to unship this, and we need to build some reliable source of that data before we can re-ship it.  I think this is what comment 41 is arguing.

My personal take is that both positions have some merit:
* Stripping "www." is consistent with general user-confusion over the meaning/value of this, largely harmless, and consistent with longstanding behavior in Chrome that does things like autocompleting by silently prepending "www." when the user hasn't typed it to find history matches (which only makes sense if the two forms are "equivalent").  The supposition is that this stripping has some user benefit, though honestly I've never seen this quantified, merely asserted without proof as "obvious".
* "largely harmless" means "occasionally harmful", and even with all heuristics suggested above, we're clearly going to be eliding in ways that change the meaning sometimes.  Given that the only way to know for sure if "" and "" are the same is to try to fetch them, we can never build a perfect heuristic for this ahead of time.  The user benefit here is questionable -- while schemes are pretty noisy, "www." doesn't really seem to make it harder for people to parse domain names -- so we lose little from reverting this.  Finally, reducing the frequency and severity of text shifting on text selection would be great.

If I were making the call, I'd unship entirely; my suggestions in comment 10 were based on a "camp (1)" assumption, but if people are willing to entertain more severe remedies, I think unshipping is better than patching.  But I could live with patching.
Project Member

Comment 48 by, Sep 10

The following revision refers to this bug:

commit 1883bad43889c9f0a5d7f6ca3dabff648c73bb90
Author: Tommy C. Li <>
Date: Mon Sep 10 18:20:46 2018

[omnibox] Updates to handling of trivial subdomains.

1. Revert the change where we decided to ignore eTLDs
( We figured that none of these would allow
"www" or "m" as a subdomain but the case proves this
2. Only strip leading trivial subdomains.

Bug: 881694
Change-Id: Ia8b7bb02611edc4d6c6c2c634f428dee0bff11e2
Reviewed-by: Tommy Li <>
Reviewed-by: Justin Donnelly <>
Reviewed-by: Carlos IL <>
Commit-Queue: Tommy Li <>
Cr-Commit-Position: refs/heads/master@{#589985}

Labels: Merge-Request-70 Merge-Request-69
Hey all, Justin's CL to fix the most prominent issues just landed in c#48.

I'm requesting a merge to 69 and 70 here.
How safe is this change to merge to M69 without Canary coverage, given the M69 release tomorrow?

Also, it seems we have a Finch option here. Is it OK to use Finch for this and pick up the CL listed at #48 in a future M69 respin (if any), so that by then the change will be well baked in Canary and safe to merge?
Project Member

Comment 51 by, Sep 10

Labels: -Merge-Request-69 Merge-Review-69 Hotlist-Merge-Review
This bug requires manual review: Request affecting a post-stable build
Please contact the milestone owner if you have questions.
Owners: amineer@(Android), kariahda@(iOS), cindyb@(ChromeOS), govind@(Desktop)

For more details visit - Your friendly Sheriffbot
Update from discussion by chat with govind@ and others. I believe that this (CL in #48) is safe to merge for the following reasons:

- The change is narrow and well covered by unit tests.
- While the change is well covered by unit tests, it's actually difficult to exercise in practice so manual testing in the next Canary wouldn't help much.
- The code path that this touches is entirely bypassed when the feature is disabled by Finch (see [1] below). If we don't include this change in today's respin, our only other option is to disable the feature via Finch. So we're totally fine with disabling via Finch if it should turn out that merging the CL causes unforeseen issues. (And, of course, as pkasting's summary in #47 lays out nicely, we may choose to still disable via Finch even if the merge happens and doesn't introduce any issues.)

Labels: -Merge-Request-70 -Merge-Review-69 Merge-Approved-70 Merge-Approved-69
Approving merge to M69 branch 3497 for CL listed at #48 based on comment #53 and per offline group chat. Please merge ASAP. Thank you.

Also approving merge to M70 branch 3538 (+ abdulsyed@ as FYI)
Project Member

Comment 55 by, Sep 10

Labels: -merge-approved-70 merge-merged-3538
The following revision refers to this bug:

commit cf57b39acc29cace17d729e8be6e88e1f55c1a34
Author: Tommy C. Li <>
Date: Mon Sep 10 19:07:04 2018

[omnibox] Updates to handling of trivial subdomains.

1. Revert the change where we decided to ignore eTLDs
( We figured that none of these would allow
"www" or "m" as a subdomain but the case proves this
2. Only strip leading trivial subdomains.

Bug: 881694
Change-Id: Ia8b7bb02611edc4d6c6c2c634f428dee0bff11e2
Reviewed-by: Tommy Li <>
Reviewed-by: Justin Donnelly <>
Reviewed-by: Carlos IL <>
Commit-Queue: Tommy Li <>
Cr-Original-Commit-Position: refs/heads/master@{#589985}(cherry picked from commit 1883bad43889c9f0a5d7f6ca3dabff648c73bb90)
Cr-Commit-Position: refs/branch-heads/3538@{#238}
Cr-Branched-From: 79f7c91a2b2a2932cd447fa6f865cb6662fa8fa6-refs/heads/master@{#587811}

Labels: -Merge-Approved-69
My CL can't be merged cleanly to M69 so I'm removing the merge request.
Discussions are ongoing about how to proceed. I'll update this bug soon.
Based on comments #56, #57 and offline group chat, we're not considering this a blocker for the M69 Desktop respin tomorrow.
Labels: -M-69 -Target-69
Hrm.  How hard is getting the merge done?  If we don't include it in the respin tomorrow, are we going to Finch this off?  I feel like doing something soon would be good, so either Finching off, merging manually, or delaying the respin by a day would all be better than doing nothing, but maybe I don't have good perspective.
Agreed, "do nothing" is not an option. More details shortly.
Here's the new plan:

- I'm going to turn off this feature via Finch today.
- We're planning to proceed instead with 'www' (not 'm') elision in M70, with the fixes for (2) and (4).
- Plans for how to deal with 'm' going forward are under discussion.
- We will send an e-mail to chromium-dev that explains the change in plans.
Also, tommycli is going to add (3) to M70 as well.
The Finch change to disable in M69 has landed: http://cl/212352740.
> We're planning to proceed instead with 'www' (not 'm') elision in M70, with the fixes for (2) and (4).

What's the rationale for eliding "www" but not "m"? (Based on the above discussion, it seems like it could be " is a user's blog, but is the Tumblr home page". That isn't a great reason because it applies only to a single site; we don't know if there are other non-PSL sites that have "www" as an available username.)

Given the above discussion, I'm less happy with (4) as a solution, given that the PSL is not supposed to be relied upon for security. Note that the concept of "public suffix" was recently added to the URL standard ( but with a big red caveat: "Specifications should avoid depending on "public suffix", "registrable domain", and "same site"." Basically, they put it in so they could specify historical rules, but strongly discourage any new things from using it.
The rationale for 'www':
I think it's very widely understood that you should not let users control the 'www' subdomain. There are already other places in Chromium code where 'www' is treated as equivalent to the root domain. If another big site were to tell us that they gave out 'www' to an end user, I think it would be pretty clear that (a) they shouldn't have done that, and (b) they should undo it.

I had thought that this would also be true for 'm', but apparently it might not be. So we'll investigate that further before proceeding (or not) with 'm'.
Summary: www subdomain is removed even when it isn't the leftmost subdomain (was: www subdomain is removed even when it isn't the root subdomain)
Now that the feature is finched off, there is no more live security risk. I'm going to un-restrict this bug.
Labels: -Hotlist-Merge-Review -Restrict-View-Google -Restrict-View-SecurityTeam -ReleaseBlock-Stable -merge-merged-3538
I'm not sure that either (a) or (b) is true. Or at least, they're not part of any public messaging we've done, nor part of any security best practices to date. I would not be surprised if a number of sites on the PSL don't have any blacklist treating hosts like 'www' as special, and we know that some providers (such as Amazon) are built in such a way that 'www' for cloud domains is not at all special, and can be controlled separately from the entity of the parent domain.

If we are going to say 'www' is special, then from a process standpoint, we should likely establish a way to document and signal that clearly (IETF? Chrome Developers site? WHATWG post?), as well as a process to declare other domains are (retroactively?) special.
FWIW, I also don't agree that the assertions in comment 67 are true (although I do think sites which treat "www." differently from its parent are making a mistake).  I don't think we can rely on this as justification for eliding "www." in M70.

Regarding comment 66, I don't view this as "relying on the PSL for security", but simply "reducing the number of cases where we incorrectly mangle what we show users".
This feature is likely in violation of RFCs. A browser should not make preferential judgment calls about the naming methodology for the internet, especially where security, user training & perception, URL redirection, URL rewrite and other considerations all hinge on how this presently functions. There ultimately is no such thing as a trivial subdomain.
@74 Read this thread. They switched off the feature entirely while they work on a longer-term fix.


Now that -Restrict-View-SecurityTeam is no longer on this bug, I expect the signal-to-noise ratio in the comments is about to go way down. So before Restrict-AddIssueComment-EditIssue inevitably ends up being necessary, I'd like to raise the possibility of eliding subdomains based on other heuristics.

Could eliding possibly be done on an opportunistic basis by making a request to the apex and checking for rel=canonical or rel=alternate links? Or perhaps checking that the TLS certificate for the domain explicitly includes both www and the apex in its list of common names?

Just floating some ideas. Long-term a standards-based solution would be preferable; maybe some kind of DNS record or something for indicating alternative domains.
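One way the certificate idea above might look, as a rough sketch: given a certificate's subject alternative names (passed in directly here; a real check would read them from the TLS handshake), treat "www.<apex>" as equivalent to the apex only if a single certificate covers both. The matching rules below are deliberately simplified (exact match, or one leading wildcard label), and all names are illustrative:

```python
# Sketch: only elide "www." if one cert's SAN list covers www and apex.
def san_matches(pattern: str, host: str) -> bool:
    """Minimal SAN matching: exact, or a single leading wildcard label."""
    if pattern.startswith("*."):
        suffix = pattern[1:]  # ".example.com"
        return host.endswith(suffix) and "." not in host[: -len(suffix)]
    return pattern == host

def may_elide_www(apex: str, sans: list[str]) -> bool:
    www = "www." + apex
    return any(san_matches(p, apex) for p in sans) and \
           any(san_matches(p, www) for p in sans)

print(may_elide_www("example.com", ["example.com", "www.example.com"]))  # True
print(may_elide_www("example.com", ["*.example.com"]))  # False: wildcard excludes the apex
```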
Interesting ideas.

What is even the problem being solved with domain manipulation?

(Knowing that will help inform the possibilities.)
Anecdotally, I've noticed that the URL bar looks a lot cleaner without all the "https://www." boilerplate tacked onto the front of every domain, so it may have just been for aesthetic reasons. I'd also speculate there may be security benefits to this, increasing focus on the more important parts of a domain name by removing clutter from the URL bar.

I'm not on the Chrome team though, so I'm only guessing. I echo your question: what was the Chrome team's original motivation for this feature?
The goal of the project is to emphasize the meaningful parts of the URL, and de-emphasize the less meaningful parts. On mobile in particular, "https://www." takes up half of the viewport before you get to the meaningful part of the URL.

@71, @72: If you give someone the ability to control 'www' and they put up a phishing domain for your page, you are screwed. Even the most expert user (let's say Tavis) will be fooled like that. So regardless of whether we elide 'www' or not, it's bad practice w/r/t phishing to give out that subdomain to other folks.
Not all domains are used by the public. It is entirely reasonable not to want Chrome to misleadingly mangle well-formed URLs. As the PSL is designed for a different purpose and cannot sensibly be re-used here, I do not see a reasonable way for this feature to work at the present time.

Without a canonical means of determining the equivalence of allegedly "trivial" subdomains to the main domain I could only support the use of an explicit opt-in.  Anything else is Chrome overstepping its "authority".
> The goal of the project is to emphasize the meaningful parts of the URL, and de-emphasize the less meaningful parts. On mobile in particular, "https://www." takes up half of the viewport before you get to the meaningful part of the URL.

This might make sense if there were a way to get back to the full URL in the address bar. The present implementation makes that highly difficult if not impossible. As far as I can tell, the only way to get the full URL with this feature enabled is to copy it to the clipboard.

Safari does this by hiding most of the URL until a user clicks on the address bar and then showing the full URL. That is significantly better than what Chrome was doing.

Firefox uses highlighting while still showing the full URL. Again, much better than only providing the elided URL in the UI.

If Chrome is going to continue down this path, there has to be a reliable means to see the full URL that does not involve the clipboard and a text editor.

#80: the full URL is revealed on double-click, click-drag, or any mouse or keyboard interaction with the omnibox after it is focused.
re #80: If you click on it to edit, you can see the full URL. We can explore making this more discoverable.
I'm not really a fan of the address bar hiding part of the url until it is clicked, particularly when I'm at my main computer. I often find myself trying to change a small piece of a URL in the address bar, and the text changing in the address bar has its issues when I do that.

For example, if I'm looking at a YouTube channel with an easy URL and I want to go to another one whose URL I know, I can double click the username in the address bar, then type another username. If I click in the address bar once and the text changes position because "https://www." is now showing, my double click ends up being read at a different location in the address.

The way Chrome seems to be working is that it waits for a second click before showing the rest of the URL, I end up highlighting the correct word, but then it moves from underneath my cursor afterwards, which is a bit better for this specific purpose, but still somewhat distracting.

On the flip side of the coin, it does look cleaner, especially on mobile where screens are small.
re: #82 Interesting, the initial click does not reveal the url, but subsequent clicks or keyboard interactions do. That's clunky and not intuitive to an experienced user, though not as bad as it looked at first. 
There's also some funky stuff going on with initial click and subsequent clicks on the Omnibox with the favicon and HTTP/HTTPS lock icons. See

If the Chrome team decided to just completely strip the www subdomain off of all URLs before even making the request, I would be more supportive of the change. So if a user typed in, the browser would navigate to

I would disagree with the change, but at least I could see more justification for that -- if www is *actually* trivial, then it should be OK to actually remove it. That would also significantly decrease the phishing risks that people are worried about, because if a site decided to engage in bad practice and distribute the www subdomain, that subdomain just wouldn't be reachable.

You could even approach it in a staged way for sites that are still relying on it:
- check the url without www.
- if it resolves, serve that and completely strip www.
- if it does not resolve, check the url with www.
- if it resolves, serve that and still strip www.

This would also impose a speed penalty on sites that served content under www, which would be an extra encouragement to switch off of the "outdated" subdomain.

But for the same reason why I suspect many people, including me, would feel nervous about making the change I just described, I feel nervous about hiding the www. It seems like the browser is saying two things at the same time - www is important enough that we can't get rid of it, but it's not so important that people need to know about it.

I agree that it's bad practice for websites to serve different content on the www subdomain. But, it appears that some sites do - and that content is sometimes meaningfully different. So a change should either force them to update their websites, or it should let the user know that they need to pay attention to the subdomain. I don't think that hiding www does either.

I don't think the www domain is actually trivial. If it was, proposing a more drastic action wouldn't sound as problematic as it does. So I'll throw my hat into the ring for de-emphasizing or graying subdomains, but not outright hiding them.
#82: Even a reveal on hover (at least for desktop) would make things more clear.
(Aside: This has gotten off track from the original bug about the non-leftmost "www" subdomains being elided, to a more general conversation about eliding "www" at all. It's fine for this to continue, but is there another bug we can use to discuss that general case, to keep this one focused?)

I agree with #84, #85, #87: it's clunky to only reveal the hidden content after a second click; it makes selecting a specific part of the URL difficult because it jumps around as you click on it. I believe what we had before M69 (where we still elided the "http://", so we had the same issue) was sufficient: it would reveal the hidden content whenever the text was selected, so you could select a specific part of the URL by 1. clicking the Omnibox (revealing the unabridged URL), then 2. drag-selecting your substring. Now if you try that technique, as soon as you start drag-selecting, everything will move.

Revealing on hover might also help.
Really happy to see this derestricted so everyone gets to read the high-signal discussion. Noise/distraction can indeed make it very hard to get a good understanding of what's going on.

+1 for reveal on hover.

77: A small clarification about click-then-drag-to-select. I just fiddled around with this out of curiosity (and irritation - I've been stumped by it for years), and stumbled on the fact that there are actually four or so behaviors that can happen in the omnibox.

- If I click then drag (ie [mouse down, mouse up], [mouse down, move]) while keeping the mouse at the same coordinates, then

- a) if I take more than the double-click detection(?) threshold time, I'll select the whole URL on the first click and on the second click+drag Chrome will assume I want to drag the URL somewhere. (I can totally imagine new/elderly/disabled users with poor coordination dragging URLs onto pages, hopefully without filled-in forms on them. I totally may or may not have done the same 3 or 4 times while testing this... good thing this page uses window.onunload)

- b) if my click; click+drag takes _less_ time than the double-click threshold, the URL deselects and Chrome falls-thru to double-click-select-word mode.

- If I move the mouse more than 1 pixel but less than 5 pixels (?) between the click and click+drag steps, the above will always use (b). If I move more than 5 pixels, Chrome will use (a). If I accidentally move while initially clicking the URL, then Chrome will think I want the next one:

- If I drag over the URL while the omnibox is not focused, I get standard selection behavior. In other news, I just got a new lease on life because I've been using Chrome for years and never knew this was how you did character-by-character selection. So I'd click, (wait for the double-click timer to elapse), click again, select. Heheh... ._. (I guess I just realized the model I picked up (circa Win3.1/Win95/Win98?) was click-in-the-box-to-focus-it, drag-in-the-box-to-select.) 


For those wondering what Finch is, I read a tiny bit about it at It's basically a feature-flag server. (One of these days I'm gonna have to add http://go/ and http://cl/ to my local nginx instance, although I don't know what I'll do with them :) )


Here's a totally random thought that I want to toss into the mix just in case it's useful. Extensive (normal) use of Google's autocomplete has yielded domain.tld/path/to/url suggestions on many occasions. Perhaps there's a useful dataset of URLs hiding in there that could be insightful to scrape through (and for all I know some percentage of what's in there may not meet display thresholds). The discussion that's taken place over the past couple days shows that this subdomain elision functionality has quite a bit of internal traction, and that may mean this dataset may be accessible to the effort. It's going to have probably the broadest list in existence, surely. And if this team is not already using this dataset for www-vs-no-www analytics...

(Ha, I wonder how many entries/hits it gets for "w" "ww" "www" "www."? And I've always been curious how many people still type "https://"...)

Google (and/or its subsidiaries) have signed contracts with ICANN via Registry Agreements and Registrar Accreditation Agreements. Parts of these contracts include requirements to uphold the stability and security of the DNS. By not properly displaying DNS records you may be in breach of these agreements (without explicitly and unequivocally explaining the security risks to users, possibly on every use). I would suggest you speak to the legal team from Charleston Road Registry and/or ICANN compliance. The Security and Stability Advisory Committee at ICANN have been informed of the Chromium team's intentions; it might be an idea to reach out.
I would urge everyone at Chromium who still seriously wants to push through this feature - which in my personal opinion is flawed, especially on desktop versions of Chromium - to consider changing the flag "omnibox-ui-hide-steady-state-url-scheme-and-subdomains" into a policy, so that enterprises could at least disable this "feature" on all clients at once.
Another problem with eliding "trivial" elements is that some web operators do not have full control over their domain settings, and hence cannot redirect "www" to the bare domain. Also, while Western users are either accustomed to bare URLs or conflate the www form with the bare one, here it is always emphasized that one should check for the www (a real story with the elided HTTPS and www: our library decided to switch the browser to Firefox, until we disabled the flag, because of complaints about suspected SSL stripping).
@74 Canonical links might be a good candidate, as this makes it opt-in. However, on mobile it might be problematic (especially if the canonical one is the version with the www).

DNS might be good, and we already have CNAME, which could tell the browser that the CNAME target is the canonical host name; but IPv4 scarcity means we have to rely on hostnames to differentiate websites, and CNAME becomes nonsensical for this.

TLS might be worth thinking about, but you still have that pesky and (thankfully, one down, as it has been moved to
Re #92: What does eliding the www have to do with SSL Stripping?
Re #94: The subdomain is different, with either a separate or non-existent certificate. While stripping HTTPS on the root is difficult when HSTS headers are set (it only catches first-time visitors), if a domain doesn't have includeSubDomains enabled in its header (we don't, for historical reasons; it's quite common), any visitor linked to http:// on a subdomain can go there without a certificate warning. And that subdomain can look/feel/masquerade as the non-www.

There's also subdomains that aren't there intentionally (a rogue DNS entry), this has the same set of problems - looking like the main site now. This one requires more moving pieces to execute from an attack standpoint (and it can be argued DNS is "game over" anyway), but this trivial change enables more attacks. I am failing to see how it prevents any, though.
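For readers unfamiliar with the header being discussed, the subdomain concern above hinges on the HSTS includeSubDomains directive. A sketch of the two policy variants (the max-age value here is illustrative):

```
# Protects only the exact host that served the header:
Strict-Transport-Security: max-age=31536000

# Also protects every subdomain, closing the http://www. stripping hole:
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Without the second form on the bare domain, a subdomain reached over http:// gets no HSTS protection even when the bare domain itself is locked down.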
@94 While I'm aware of the silliness of the argument, between the removal of the "Secure" tag and the eliding of the domain, some people have reacted (in my experience) as though the padlock might be an illusion, especially given that:

1) The protocol is also elided (leaving no clue that the page is on HTTPS).
2) The www prefix is elided too, alarming them even further.
Project Member

Comment 97 by, Sep 12

The following revision refers to this bug:

commit b0da94c0b467fd881527f216b1511d6c1cdeaf10
Author: Tommy C. Li <>
Date: Wed Sep 12 16:45:45 2018

Omnibox: Only strip first www and nothing else.

This CL makes two changes:
 1. Removes stripping "m." on Android.
 2. Only strips the very first "www.", not any further ones.

Bug: 881694
Change-Id: I60c3b3fe274d72a1317cc2f5bbca8a63ba3ffd7e
Reviewed-by: Jochen Eisinger <>
Reviewed-by: Justin Donnelly <>
Cr-Commit-Position: refs/heads/master@{#590717}

Can you check if bug in @74 is not an issue after the above changes?
#98: Thanks for drawing attention to this case. I've tried it in the latest Canary (71.0.3550.0 on Windows) and with a build from head that includes the change in #97. In both cases, the double-click selection in the case is now correct.
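As an aside for anyone following along, the post-fix elision rule described in the CL above ("Only strip first www and nothing else") can be sketched in a few lines. This is an illustrative Python sketch, not the actual Chromium C++ code:

```python
def elide_for_display(host):
    # Illustrative sketch (not the real Chromium implementation):
    # after the fix, exactly one leading "www." is stripped from the
    # displayed hostname, and nothing else ("m." is no longer touched).
    prefix = "www."
    if host.startswith(prefix):
        return host[len(prefix):]
    return host
```

So "www.www.example.com" now displays as "www.example.com" (the inner label survives), and "m.example.com" is displayed unchanged on Android.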
Labels: Merge-Request-70
Merge request for the CL in #97.
#100: Let's verify this in canary first and revisit merge review.  
Per the chromium documentation:

"Furthermore, Chrome can only guarantee that it is correctly representing URLs and their origins at the end of all navigation. Quirks of URL parsing, HTTP redirection, and so on are not security concerns unless Chrome is misrepresenting a URL or origin after navigation has completed."

Why is this feature designed to "misrepresent a URL or origin after navigation has completed" then? Lying about the hostname to novices and power users alike in the name of simplifying the UI seems imprudent from a security perspective.

The identity of a page is not contained just in the topmost levels of its domain name. The concept of eliding any part of an origin, not just its current implementation, is a serious issue that violates the trust a user expects to place in their user agent and the pages it navigates to.

The chrome area is sacred, intended to be implicitly trusted and not easily tampered with, and the browser should not erode the confidence users have been taught to place in it.
#102: Think of it as a problem of precision vs. accuracy, with the added twist that maximum precision can negatively impact effective accuracy and perceived meaning, as perceived by a person using Chrome.

It's certainly true that sometimes is different than, but I think more often they are the same. (I think this is especially true for the large head of hugely popular sites, even if in the long tail there are counter-examples. Anecdotally, I checked facebook, twitter, google, tencent, taobao, tumblr, and baidu.)

And if people can more readily distinguish "" from "" when the additional precision is elided than when it is not elided, then that's a net win, even though it's strictly less precise.

These are all empirical questions, of course. My point here is only to argue that precision is not *necessarily* the best way to achieve the goal of helping people know 'where' they are on the web.

(And it remains my view that expecting people to parse and understand URLs, origins, or hostnames is not a sustainable, scalable solution to the phishing problem.)
Better to show the full host name of the location the server actually served on the last request.

such as:
 type in `` and the server returns a response
   asking the browser to redirect to ``: show ``
  or asking to redirect to ``: show ``
 type in `` and the server returns 200: show ``
 type in `` and the server asks the browser to redirect to ``: show ``

 Typing in `` and getting 200 but showing ``, or typing in `` and getting 200 but showing ``, are both invalid!
 Users should know which host they accessed, not just the domain.

 So if users type in `` or `` they will get nothing (a DNS error).

twitter redirects `` to ``
google redirects `` to ``

that's different

> GET / HTTP/2
> Host:
> User-Agent: curl/7.58.0
> Accept: */*
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 301 
< content-length: 0
< date: Thu, 13 Sep 2018 03:26:30 GMT
< location:
< server: tsa_a
< set-cookie: personalization_id="v1_R7Wkoj+UQciUm5uEpTLhMA=="; Expires=Sat, 12 Sep 2020 03:26:30 GMT; Path=/;
< set-cookie: guest_id=v1%3A153680919080218685; Expires=Sat, 12 Sep 2020 03:26:30 GMT; Path=/;
< strict-transport-security: max-age=631138519
< x-connection-hash: 4277365fd83f34e35b6c98c414a00a08
< x-response-time: 3
so showing was expected
but showing `` as `` I do not agree with
one case is a single domain with a single product, such as twitter, facebook, tumblr, or github

another is multiple products in a single domain (by different host or path)
such as google (
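The proposal in the comment above (display the host the server's final redirect actually landed on, rather than an elided form of what was typed) could be sketched as follows. This is a toy illustration, with a hypothetical helper name, not anything Chromium implements:

```python
def host_to_display(typed_host, redirect_hosts):
    # Toy sketch of the proposal above: after navigation completes,
    # show the host of the final response in the redirect chain,
    # exactly as the server named it, with no elision.
    chain = [typed_host] + list(redirect_hosts)
    return chain[-1]
```

Under this rule, a twitter-style `www` host that 301-redirects to the bare domain would display the bare domain, while a host that answers 200 directly would display exactly as typed.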
Labels: Restrict-AddIssueComment-EditIssue
I'm restricting comments on this bug to project members. If you have general feedback on the feature, please direct it here:
Project Member

Comment 107 by, Sep 13

Labels: -Merge-Request-70 Merge-Review-70 Hotlist-Merge-Review
This bug requires manual review: Reverts referenced in bugdroid comments after merge request.
Please contact the milestone owner if you have questions.
Owners: benmason@(Android), kariahda@(iOS), geohsu@(ChromeOS), abdulsyed@(Desktop)

For more details visit - Your friendly Sheriffbot
TPMs: Fix appears to be working correctly in the new Canary (71.0.3551.0).
Labels: -Merge-Review-70 Merge-Approved-70
Project Member

Comment 110 by, Sep 13

Labels: -merge-approved-70 merge-merged-3538
The following revision refers to this bug:

commit ec8b2c935c437dbaddefb4722fae264e48a80288
Author: Tommy C. Li <>
Date: Thu Sep 13 20:53:23 2018

Omnibox: Only strip first www and nothing else.

This CL makes two changes:
 1. Removes stripping "m." on Android.
 2. Only strips the very first "www.", not any further ones.

Bug: 881694
Change-Id: I60c3b3fe274d72a1317cc2f5bbca8a63ba3ffd7e
Reviewed-by: Jochen Eisinger <>
Reviewed-by: Justin Donnelly <>
Cr-Original-Commit-Position: refs/heads/master@{#590717}(cherry picked from commit b0da94c0b467fd881527f216b1511d6c1cdeaf10)
Reviewed-by: Tommy Li <>
Cr-Commit-Position: refs/branch-heads/3538@{#386}
Cr-Branched-From: 79f7c91a2b2a2932cd447fa6f865cb6662fa8fa6-refs/heads/master@{#587811}

 Issue 883625  has been merged into this issue.