
Issue 796333

Starred by 6 users

Issue metadata

Status: Fixed
Closed: May 2018
EstimatedDays: ----
NextAction: 2018-05-21
OS: Windows
Pri: 2
Type: Feature


Inclusion of new DigiCert CT Log - Yeti

Reported by, Dec 19 2017

Issue description

UserAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36

Steps to reproduce the problem:
This is the new DigiCert CT log operating on our updated infrastructure. The log is open to all public trusted CAs free of charge (just like log 2). This is our highly scalable log and has been tested up to about a billion certs. Note - this is actually five logs, sharded in one year increments based on when the certificate expires. 

The PEM is attached for all five logs. 
MMD is 24 hours for all logs

Contact info will be:
Phone: 801-633-8482
Authorized persons: Jeremy Rowley, Rick Roos, Dan Timpson, Wade Choules (all of us are on the ctops email  alias)

Here are the URLs and their Certificate Expiry Ranges:
Dec 12 2017 00:00:00Z inclusive to Jan 01 2019 00:00:00Z exclusive
Jan 01 2019 00:00:00Z inclusive to Jan 01 2020 00:00:00Z exclusive
Jan 01 2020 00:00:00Z inclusive to Jan 01 2021 00:00:00Z exclusive
Jan 01 2021 00:00:00Z inclusive to Jan 01 2022 00:00:00Z exclusive
Jan 01 2022 00:00:00Z inclusive to Jan 01 2023 00:00:00Z exclusive
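The expiry-range sharding described above can be sketched as follows. This is an illustration only, not DigiCert's code: the shard names (Yeti2018 through Yeti2022) come from later in this thread, and `shard_for` is a hypothetical helper that maps a certificate's notAfter date to the shard whose window contains it (lower bound inclusive, upper bound exclusive).

```python
from datetime import datetime, timezone
from typing import Optional

# Shard windows mirror the expiry ranges listed above:
# lower bound inclusive, upper bound exclusive.
SHARD_BOUNDARIES = [
    ("Yeti2018", datetime(2017, 12, 12, tzinfo=timezone.utc), datetime(2019, 1, 1, tzinfo=timezone.utc)),
    ("Yeti2019", datetime(2019, 1, 1, tzinfo=timezone.utc), datetime(2020, 1, 1, tzinfo=timezone.utc)),
    ("Yeti2020", datetime(2020, 1, 1, tzinfo=timezone.utc), datetime(2021, 1, 1, tzinfo=timezone.utc)),
    ("Yeti2021", datetime(2021, 1, 1, tzinfo=timezone.utc), datetime(2022, 1, 1, tzinfo=timezone.utc)),
    ("Yeti2022", datetime(2022, 1, 1, tzinfo=timezone.utc), datetime(2023, 1, 1, tzinfo=timezone.utc)),
]

def shard_for(not_after: datetime) -> Optional[str]:
    """Return the shard whose expiry window contains not_after, if any."""
    for name, lo, hi in SHARD_BOUNDARIES:
        if lo <= not_after < hi:
            return name
    return None
```

A certificate expiring mid-2019 would be submitted to Yeti2019; one expiring after Jan 01 2023 would not be accepted by any of these five shards.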

What is the expected behavior?

What went wrong?
All roots trusted in the NSS root store are included by default in this log.

Did this work before? N/A 

Chrome version: 63.0.3239.84  Channel: n/a
OS Version: 10.0
Flash Version:
997 KB Download

Comment 1 by, Dec 19 2017

Let me know if there's anything else I need to include. Thanks!
Components: Internals>Network>Certificate
Components: -Internals>Network>Certificate Internals>Network>CertTrans
Status: Untriaged (was: Unconfirmed)
Hi Jeremy - The log public keys (in DER) do not appear to be attached to this bug.

Comment 5 by, Dec 19 2017

91 bytes Download
91 bytes Download
91 bytes Download
91 bytes Download
91 bytes Download
Application looks good; we're going to track all 5 temporally sharded logs in this bug. Over to CT team for monitoring progress.

Comment 7 by, Jan 29 2018

Thank you for your request, we have started monitoring your log server.
Should no issues be detected, the initial compliance monitoring phase
will be complete on Apr 29 2018 and we will update this bug shortly 
after that date to confirm.

Comment 8 by, Jan 29 2018

I would like to amend the previous comment.  As we have been compliance monitoring the Yeti Logs since the beginning of the year, and the delay between the go ahead on Jan 5th and the official monitoring period beginning was simply me not noticing that update until today, we have discussed it and decided that the start date of the official monitoring period for the Yeti Logs should be Jan 6th 2018.

This means that the initial compliance monitoring phase will actually be complete on Apr 6 2018 and we will update this bug shortly after that date to confirm.

Comment 9 by, Mar 28 2018

We are adding the 'DigiCert Trusted Root G4' root to our CT server.


Status: Started (was: Untriaged)
These Logs have passed the initial 90 day compliance period and we will start
the process to add them to Chrome.

Comment 11 by, May 1 2018

For the last ~18 hours, my monitor has received a 500 Internal Server error on all requests to the 2018 shard's get-entries endpoint. The most recent STH it has been able to verify is dated 2018-04-30 14:00:57.573 UTC, which is more than 24 hours ago. The other shards' get-entries endpoints also appear to be down, but I haven't looked to see if they have blown their MMDs as well.
Cert Spotter has also been unable to retrieve recent entries from Yeti 2018.  The most recent STH it has been able to verify is timestamped Mon Apr 30 05:00:32 UTC 2018.

The most recent STH Graham Edgecombe has been able to verify is 2018-04-30 08:00:42 UTC, and his monitor has marked Yeti 2018 as "Bad".
After taking a preliminary glance at what Google's compliance monitor has seen (which I will follow up on in more detail), our monitor has not had any problems getting or verifying recent STHs from Yeti2018.  For the non-2018 Logs, we have seen sporadic get-entries errors, but we have eventually been able to get the necessary entries for merge delay checking.  However, we have been unable to add any certificates to Yeti2018 at all today (UTC), making us unable to do merge delay checks, so it would seem that something may be up with that shard.

I will investigate further tomorrow, and report back with more in-depth and properly verified findings on our end.  We do also have tooling that ingests entries from Logs, so we will check that to see if we are seeing the same issues with get-entries from the point of view of that tool.

Out of interest, you mention the most recent STH you have been able to *verify*... have you received more recent STHs that you have been unable to verify, or are all the STHs you've received verifiable?  Also, what do you mean by verify?  Are we talking just checking the signature verifies, or building the Log (thus relying on get-entries) and checking the hash matches too?
We're looking into it as well. All I know for now is that it was deployed
to AWS. AWS had some i/o issues that caused the cert to be down. The report
I received internally was that we were down for 4 hours so I'm not sure why
it was reported as down for more than 24. I'll get back to you when I know
more info.
For Cert Spotter, verifying an STH means that it has been able to use get-entries to build a tree with a root hash that matches the STH.  I have received more recent STHs with authentic signatures whose root hashes I have not been able to verify.

Does Google's compliance monitor use get-entries to verify an STH's root hash, or does it use consistency proofs?
The current compliance monitor doesn't use get-entries, but the other tooling I mentioned we have that ingests the entries from Logs does verify STHs in the way you mention - by building the tree.  We'll take a look tomorrow at what has been seen by that tool, to see if we have seen the same things as you :)
To be clear - the current compliance monitor doesn't use get-entries *to verify STHS* - it does use get-entries to check merge delays.
Our I/O on two of our three database nodes went down, which basically
destroyed the queries into the system as we couldn't get quorum. Because
quorum couldn't be verified, some people (most?) saw query timeouts.  One
of our alerts did not trigger properly so we missed the first alarm.
However, I think the downtime was closer to 4 hours. We are digging into
the logs and will give you more info once we find out when the I/O issue
started happening. There shouldn't have been any inconsistent tree heads
signed though.

We fixed it by updating the server types on AWS. Not sure why this fixed
it, but it apparently did.

Comment 19 by, May 1 2018

Yeti2018's get-entries endpoint is now working. My monitor was able to verify all the STHs it retrieved during the outage.

By "verify" I mean not only retrieving and verifying the signature on the STH but also retrieving the entries and checking that the hashes match. My monitor was never unable to fetch a STH.
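The verification described here, building the tree from the retrieved entries and checking that the root hash matches the STH, follows the Merkle Tree Hash from RFC 6962 section 2.1. A minimal sketch (the entry byte strings are placeholders for the real MerkleTreeLeaf encodings a monitor would hash):

```python
import hashlib

def mth(entries: list) -> bytes:
    """Merkle Tree Hash per RFC 6962 section 2.1 (entries are bytes)."""
    n = len(entries)
    if n == 0:
        return hashlib.sha256(b"").digest()
    if n == 1:
        # Leaf hash: SHA-256(0x00 || leaf input)
        return hashlib.sha256(b"\x00" + entries[0]).digest()
    # Split at the largest power of two strictly less than n
    k = 1
    while k * 2 < n:
        k *= 2
    # Interior node: SHA-256(0x01 || left subtree hash || right subtree hash)
    return hashlib.sha256(b"\x01" + mth(entries[:k]) + mth(entries[k:])).digest()

def sth_matches(entries: list, sth_root_hash: bytes) -> bool:
    """True if rebuilding the tree from get-entries reproduces the STH root."""
    return mth(entries) == sth_root_hash
```

If get-entries is down, a monitor can still fetch and signature-check STHs but cannot perform this root-hash check, which is exactly the distinction drawn in this comment.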

I believe this constitutes a blown MMD because DigiCert's logs impose a 12-hour delay between submission and inclusion, so a certificate that was submitted to Yeti2018 12 hours before the most recent verified STH during the outage (i.e., 2018-04-30 02:00:57 UTC) would not have been available at the get-entries endpoint until the outage ended a couple hours ago - the last failure my monitor saw was at 2018-05-01 19:13:42 UTC. That's a span of more than 24 hours.
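The arithmetic behind that claim, with the timestamps taken from this comment (the 12-hour figure is DigiCert's stated delay between submission and inclusion):

```python
from datetime import datetime, timedelta, timezone

MMD = timedelta(hours=24)

# Cert submitted ~12h before the last verified STH (timestamped 2018-04-30 14:00:57 UTC)
submitted = datetime(2018, 4, 30, 2, 0, 57, tzinfo=timezone.utc)
# Entries not retrievable until the outage ended (last observed get-entries failure)
first_retrievable = datetime(2018, 5, 1, 19, 13, 42, tzinfo=timezone.utc)

gap = first_retrievable - submitted
blown = gap > MMD  # gap is roughly 41 hours, well over the 24-hour MMD
```

Whether such an availability gap counts as a blown MMD (versus mere downtime) is exactly what the following replies dispute.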
I don't think this is accurate. The MMD is a promise to incorporate the
cert in a log by a certain time, not that the signed head will be available
to any particular monitoring service. However, I can't say for sure that we
produced a new STH within the 24 hours. We're looking into it to see
whether the STH simply wasn't available or never got signed.  If the STH
did get signed and simply didn't respond to the query, then the MMD wasn't
really missed, there was "just" significant down-time. Whether the STH did
exist should be fairly easy to prove once we can pull the logs.

Comment 21 by, May 1 2018

My monitor did not see downtime on the get-sth endpoint, and never received a STH that was more than 24 hours old. 

RFC6962, section 7.1 states in part: "Misissued certificates that do have an SCT from a log will appear in that public log within the Maximum Merge Delay, assuming the log is operating correctly.  Thus, the maximum period of time during which a misissued certificate can be used without being available for audit is the MMD."

All I'm saying is that I believe there were certificates submitted to Yeti2018 that weren't available for audit within 24 hours of their submission. In my opinion this violates the spirit of a maximum merge delay.
I don't think it violates the spirit of the MMD.

Section 3 of 6962: "The SCT is the log's promise to incorporate the
certificate in the Merkle Tree within a fixed amount of time known as the
Maximum Merge Delay (MMD)."

Section 7.3 specifies exactly when a log is bad: "by failing to
incorporate a certificate with an SCT in the Merkle Tree within the MMD".
If the SCT existed, even if some audit systems couldn't reach it, then the
MMD was met and the log wasn't mis-behaving.

A failure to meet availability expectations is a separate issue - one that
touches browser policy concerns rather than log misbehavior.
NextAction: 2018-05-04
Unfortunately our tool that gets entries from Logs had not been configured to ingest from the Yeti Logs, so we are unable to confirm/deny the get-entries downtime for Yeti2018.

What I can confirm is the following:
- We successfully retrieved STHs from Yeti2018 all of 2018/05/01 (YYYY/MM/DD).  Usually the STHs we receive from DigiCert Logs contain a timestamp of ~12hrs before the time we receive them.  Interestingly, at around 10am (UTC) on 2018/05/01 the STHs suddenly became exceedingly fresh - the first fresh one we received at 2018/05/01 10:03 has a timestamp of 2018/05/01 10:01.  The freshness of STHs then gradually decreased over the next 9 hours, until they were back to being ~12hrs old.  In the interest of transparency, I'd be interested to know if there was anything that happened on the DigiCert end at 10am (UTC) that would explain this?  There's nothing wrong with it of course, it was just an interesting find.
- Between 2018/05/01 10:17 (UTC) and 2018/05/01 20:18 (UTC) we were unable to submit certificates to Yeti2018.
Hi Kat, I'm currently working on the post-mortem right now and will send it to the CT policy group soon. The faster refresh of STHs is due to the architecture of the logs and them trying to always keep 12 STHs signed and ready to be delivered. When one takes longer than an hour to sign due to an issue, the others will be signed faster to get back to 12.  I'll include more of those details in the post-mortem, along with details on the success and failure rates of the API calls during the outage. We did have new submission requests succeed all throughout the outage on all the Yeti logs, albeit at a much lower rate due to the database issue.
The post-mortem has been posted to the CT policy group here: !topic/ct-policy/EKB9ycLsMk0
A small correction to my last message - all the times stated there were UTC+01:00.  Apologies for this, this was a by-product of living in a city where it is UTC for half of the year, and forgetting that we've now changed to BST :)

This means that what we observed on the STH front matches what was reported in Rick's post-mortem.
The NextAction date has arrived: 2018-05-04
Labels: Needs-Feedback
As we've posted to the thread mentioned in Comment 26, we do not consider the incident noted in the post-mortem to be a disqualifying event.

With that, Yeti is ready to be added to Chrome's trusted log list, but there are currently too many CT Logs operated by DigiCert. Per discussion on ct-policy, Log Operators are being limited to 3 separate Logs or Shard Groups.

Can you provide feedback on which Logs should be removed to add Yeti and Nessie? Additionally, we will need to specify a disqualified_at date in the near future and will disqualify them after they've stopped issuing new SCTs.

Currently qualified:
DigiCert Log Server (
DigiCert Log Server 2 (
Symantec Log (
Symantec Vega Log (
Symantec Sirius Log (

Pending qualification:
DigiCert Yeti 20XX group
DigiCert Nessie 20XX group
You can disqualify the following:

Symantec Log (
Symantec Vega Log (
Symantec Sirius Log (

We announced that the logs would be discontinued both here and on the Mozilla forum. We also checked and all certs visible in these logs appear in a mirror Google log.
Project Member

Comment 32 by, May 17 2018

The following revision refers to this bug:

commit 4508ef53b34e90e445e60dda8fd45f50e89d84dc
Author: Devon O'Brien <>
Date: Thu May 17 23:41:09 2018

Add DigiCert Yeti to Trusted CT Logs

The following CT Logs have passed their monitoring period and are being
added as trusted Logs in Chrome:

DigiCert Yeti2018, Yeti2019, Yeti2020, Yeti2021, Yeti2022

Bug:  796333 
Change-Id: I83f7238ed90f2e370c7cf1fe196b6f48862f7d10
Commit-Queue: Ryan Sleevi <>
Reviewed-by: Ryan Sleevi <>
Cr-Commit-Position: refs/heads/master@{#559734}

NextAction: 2018-05-21
The NextAction date has arrived: 2018-05-21
Labels: Merge-Request-67 M-67
Requesting merge to M67. Data only change (which I just double checked in source) and I've also verified correct CT functioning in 68.0.3436.0, and UMA metrics look as expected.
Project Member

Comment 36 by, May 21 2018

Labels: -Merge-Request-67 Merge-Review-67 Hotlist-Merge-Review
This bug requires manual review: We are only 7 days from stable.
Please contact the milestone owner if you have questions.
Owners: cmasso@(Android), cmasso@(iOS), kbleicher@(ChromeOS), govind@(Desktop)

For more details visit - Your friendly Sheriffbot
Labels: -Merge-Review-67 Merge-Approved-67
Approving merge to M67 branch 3396 based on comment #35. Please merge ASAP so we can take it in for this week's last M67 Beta release on Wednesday. Thank you.
Status: Fixed (was: Started)
Project Member

Comment 39 by, May 21 2018

Labels: -merge-approved-67 merge-merged-3396
The following revision refers to this bug:

commit cdf0c4e95b6db65b4bcbfbee79cc3d2faf3b448a
Author: Andrew R. Whalley <>
Date: Mon May 21 17:52:38 2018

Add DigiCert Yeti to Trusted CT Logs

[67 merge] The following CT Logs have passed their monitoring period and are being
added as trusted Logs in Chrome:

DigiCert Yeti2018, Yeti2019, Yeti2020, Yeti2021, Yeti2022

(cherry picked from commit 4508ef53b34e90e445e60dda8fd45f50e89d84dc)

Bug:  796333 
Change-Id: I83f7238ed90f2e370c7cf1fe196b6f48862f7d10
Commit-Queue: Ryan Sleevi <>
Reviewed-by: Ryan Sleevi <>
Cr-Original-Commit-Position: refs/heads/master@{#559734}
Reviewed-by: Andrew Whalley <>
Cr-Commit-Position: refs/branch-heads/3396@{#663}
Cr-Branched-From: 9ef2aa869bc7bc0c089e255d698cca6e47d6b038-refs/heads/master@{#550428}

Comment 40 by, May 23 2018

Thank you for getting this merged into Chrome so quickly as this allows these logs to be used for non-EV certs right away.
We are adding the ' EV Root Certification Authority RSA R2' root to all Yeti logs.

We are adding the 'Atos TrustedRoot 2011' root to all the Yeti logs.

