
Issue 836332

Starred by 3 users

Issue metadata

Status: Assigned
Owner:
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 1
Type: Bug

Blocked on:
issue 839186




"ExtensionMessageBubbleTest.TestUninstallExtensionAfterBrowserDestroyed" is flaky

Project Member Reported by chromium...@appspot.gserviceaccount.com, Apr 24 2018

Issue description

"ExtensionMessageBubbleTest.TestUninstallExtensionAfterBrowserDestroyed" is flaky.

This issue was created automatically by the chromium-try-flakes app. Please find the right owner to fix the respective test/step and assign this issue to them. If the step/test is infrastructure-related, please add Infra-Troopers label and change issue status to Untriaged. When done, please remove the issue from Sheriff Bug Queue by removing the Sheriff-Chromium label.

We have detected 12 recent flakes. The list of all flakes can be found at https://chromium-try-flakes.appspot.com/all_flake_occurrences?key=ahVzfmNocm9taXVtLXRyeS1mbGFrZXNyUQsSBUZsYWtlIkZFeHRlbnNpb25NZXNzYWdlQnViYmxlVGVzdC5UZXN0VW5pbnN0YWxsRXh0ZW5zaW9uQWZ0ZXJCcm93c2VyRGVzdHJveWVkDA.

Flaky tests should be disabled within 30 minutes unless the culprit CL is found and reverted. Please see more details here: https://sites.google.com/a/chromium.org/dev/developers/tree-sheriffs/sheriffing-bug-queues#triaging-auto-filed-flakiness-bugs
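
For reference, the usual way to disable a flaky gtest in Chromium while the culprit is investigated is to rename it with the DISABLED_ prefix, or to gate it per-platform behind a MAYBE_ alias. A minimal sketch, purely illustrative (the MAYBE_ macro, the OS_CHROMEOS guard, and the TEST_F form are assumptions, not taken from an actual CL):

// Sketch only: disable the flaky test while the disk I/O error is investigated.
// gtest skips any test whose name starts with DISABLED_.
#if defined(OS_CHROMEOS)
// Flaky on linux-chromeos-rel due to sqlite disk I/O errors, https://crbug.com/836332.
#define MAYBE_TestUninstallExtensionAfterBrowserDestroyed \
    DISABLED_TestUninstallExtensionAfterBrowserDestroyed
#else
#define MAYBE_TestUninstallExtensionAfterBrowserDestroyed \
    TestUninstallExtensionAfterBrowserDestroyed
#endif

// (Or TEST, depending on how the original test is declared.)
TEST_F(ExtensionMessageBubbleTest,
       MAYBE_TestUninstallExtensionAfterBrowserDestroyed) {
  // ... original test body unchanged ...
}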
 
Labels: Infra-Troopers OS-Chrome
Multiple linux-chromeos-rel trybots are reporting a disk I/O error. Is this an infra issue?

https://logs.chromium.org/v/?s=chromium%2Fbb%2Ftryserver.chromium.chromiumos%2Flinux-chromeos-rel%2F108605%2F%2B%2Frecipes%2Fsteps%2Funit_tests__with_patch_%2F0%2Flogs%2FExtensionMessageBubbleTest.TestUninstallExtensionAfterBrowserDestroyed%2F0

[ RUN      ] ExtensionMessageBubbleTest.TestUninstallExtensionAfterBrowserDestroyed
[14088:14088:0424/091317.018703:6855802497:WARNING:pref_notifier_impl.cc(23)] Pref observer found at shutdown.
[14088:14119:0424/091317.045837:6855829637:ERROR:connection.cc(1888)] Quota sqlite error 1802, errno 0: disk I/O error, sql: CREATE TABLE meta(key LONGVARCHAR NOT NULL UNIQUE PRIMARY KEY, value LONGVARCHAR)
[14088:14128:0424/091317.055874:6855839666:ERROR:gl_context_virtual.cc(39)] Trying to make virtual context current without decoder.
[14088:14128:0424/091317.057540:6855841333:ERROR:gl_context_virtual.cc(39)] Trying to make virtual context current without decoder.
[14088:14119:0424/091317.045897:6855829687:FATAL:connection.cc(1903)] disk I/O error
#0 0x00000571aa5c base::debug::StackTrace::StackTrace()
#1 0x00000569768b logging::LogMessage::~LogMessage()
#2 0x0000061bf989 sql::Connection::OnSqliteError()
#3 0x0000061bcda9 sql::Connection::Execute()
#4 0x0000061c29ec sql::MetaTable::Init()
#5 0x0000072e1e1a storage::QuotaDatabase::CreateSchema()
#6 0x0000072e1b8a storage::QuotaDatabase::EnsureDatabaseVersion()
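
As a data point on the error itself: 1802 is an extended SQLite result code, where the low byte is the primary code and the upper bits give the sub-reason. If I'm reading sqlite3.h correctly, 1802 = SQLITE_IOERR | (7 << 8) = SQLITE_IOERR_FSTAT, i.e. an fstat() on the database file failed, which points at the filesystem/VFS layer rather than at the CREATE TABLE statement. A tiny self-contained sketch of the decomposition (constants mirrored from sqlite3.h):

// Sketch: decode the extended SQLite result code 1802 from the log above.
// Builds with any C++ compiler; no SQLite dependency needed.
#include <cstdio>

constexpr int SQLITE_IOERR = 10;                             // primary: disk I/O error
constexpr int SQLITE_IOERR_FSTAT = SQLITE_IOERR | (7 << 8);  // extended: fstat() failed

int main() {
  const int logged_code = 1802;             // "Quota sqlite error 1802"
  const int primary = logged_code & 0xff;   // low byte = primary result code
  const int detail = logged_code >> 8;      // upper bits = sub-reason
  std::printf("primary=%d detail=%d fstat=%s\n", primary, detail,
              logged_code == SQLITE_IOERR_FSTAT ? "yes" : "no");
  return 0;
}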
Labels: -Infra-Troopers
Owner: thestig@chromium.org
Status: Assigned (was: Untriaged)
Hi Lei,

I looked at a few machines that saw this error, and disk space looks normal. Can you help find someone familiar with the test to take a look? If it's indeed an infra issue, feel free to re-assign.
Cc: liaoyuke@chromium.org thestig@chromium.org
Owner: pwnall@chromium.org
Reassigning to pwnall@ from sql/OWNERS to assess the sqlite error.

liaoyuke: Would it be possible to take a bot offline and sanity check its disk + file system? Probably can't hurt, but it would take time.
Yes, and I'll investigate more if we're sure the failure is not caused by the test itself.

Comment 5 by hbos@chromium.org, Apr 25 2018

Labels: -Sheriff-Chromium
I've not seen any recent flakes, so I'm removing the sheriff label. Feel free to re-add it if this is still flaking.
Status: WontFix (was: Assigned)
The flake list in the bug description doesn't show anything past 4/24, so I'm guessing the test was dying due to a real I/O error. Closing because the flakes went away.
Blockedon: 839186
Never mind, another burst of flakes came back.

dpranke@: What guarantees do we get on bots? Are the disks on some sort of fancy self-healing disk service (i.e. blocks are stored redundantly), or are they VMs on top of commodity hardware?

I'd like to know this to figure out whether the most likely cause is real I/O errors or a SQLite bug.
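
One way to separate the two hypotheses, sketched below under assumptions (the file path, iteration count, and build line are illustrative, not from this bug): run a small standalone loop on an affected bot that executes the same CREATE TABLE against the same filesystem and logs extended SQLite result codes. If it reproduces outside Chromium's sql:: layer, the disk/VM is the more likely culprit; if not, the test or the quota database code paths deserve a closer look.

// Rough standalone repro sketch (not part of any CL): repeatedly create the
// quota meta table on the bot's local disk and log extended result codes.
// Build outside the Chromium tree with: g++ quota_repro.cc -lsqlite3
#include <cstdio>
#include <sqlite3.h>

int main() {
  const char* kPath = "/tmp/quota_repro.db";  // hypothetical scratch location
  for (int i = 0; i < 1000; ++i) {            // iteration count is arbitrary
    std::remove(kPath);                       // start from a fresh file each time
    sqlite3* db = nullptr;
    if (sqlite3_open(kPath, &db) != SQLITE_OK) {
      std::fprintf(stderr, "open failed: %s\n",
                   db ? sqlite3_errmsg(db) : "out of memory");
      sqlite3_close(db);
      return 1;
    }
    sqlite3_extended_result_codes(db, 1);     // report e.g. SQLITE_IOERR_FSTAT, not just SQLITE_IOERR
    const char* kSql =                        // same statement as in the log above
        "CREATE TABLE meta(key LONGVARCHAR NOT NULL UNIQUE PRIMARY KEY, "
        "value LONGVARCHAR)";
    if (sqlite3_exec(db, kSql, nullptr, nullptr, nullptr) != SQLITE_OK) {
      std::fprintf(stderr, "iteration %d: extended code %d (%s)\n", i,
                   sqlite3_extended_errcode(db), sqlite3_errmsg(db));
    }
    sqlite3_close(db);
  }
  return 0;
}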
Status: Assigned (was: WontFix)
