nvmem ec test crashes
Issue description

Seen on https://uberchromegw.corp.google.com/i/chromeos/builders/peach_pit-release/builds/2521/steps/UnitTest/logs/stdio

Looks like an intermittent failure, the test passed on retry.

chromeos-ec-0.0.1-r3794: ====== Emulator output ======
chromeos-ec-0.0.1-r3794: No flash storage found. Initializing to 0xff.
chromeos-ec-0.0.1-r3794: No RAM data found. Initializing to 0x00.
chromeos-ec-0.0.1-r3794: --- Emulator initialized after reboot ---
chromeos-ec-0.0.1-r3794: [Reset cause: power-on]
chromeos-ec-0.0.1-r3794: Console input initialized
chromeos-ec-0.0.1-r3794: [0.000269 SW 0x04]
chromeos-ec-0.0.1-r3794: Running test_corrupt_nvmem...nvmem_find_partition:300 partiton 0 verification FAILED
chromeos-ec-0.0.1-r3794: nvmem_find_partition:300 partiton 1 verification FAILED
chromeos-ec-0.0.1-r3794: [0.009161 nvmem_find_partition: No Valid Partition found, will reinitialize!]
chromeos-ec-0.0.1-r3794: [0.053319 Active Nvmem partition set to 1]
chromeos-ec-0.0.1-r3794: OK
chromeos-ec-0.0.1-r3794: Running test_fully_erased_nvmem...nvmem_find_partition:300 partiton 0 verification FAILED
chromeos-ec-0.0.1-r3794: nvmem_find_partition:300 partiton 1 verification FAILED
chromeos-ec-0.0.1-r3794: [0.129293 nvmem_find_partition: No Valid Partition found, will reinitialize!]
chromeos-ec-0.0.1-r3794: [0.155206 Active Nvmem partition set to 1]
chromeos-ec-0.0.1-r3794: OK
chromeos-ec-0.0.1-r3794: Running test_configured_nvmem...[0.155226 Active Nvmem partition set to 1]
chromeos-ec-0.0.1-r3794: OK
chromeos-ec-0.0.1-r3794: Running test_write_read_sequence...OK
chromeos-ec-0.0.1-r3794: Running test_write_full_multi...Stack trace of task 6 (<< test runner >>):
chromeos-ec-0.0.1-r3794: #0 /lib64/libc.so.6(__open64+0x2d) [0x7fb31518ea2d]
chromeos-ec-0.0.1-r3794:    ??:0
chromeos-ec-0.0.1-r3794: #1 /lib64/libc.so.6(_IO_file_open+0x8e) [0x7fb31511fdfe]
chromeos-ec-0.0.1-r3794:    ??:0
chromeos-ec-0.0.1-r3794: #2 /lib64/libc.so.6(_IO_file_fopen+0xf0) [0x7fb31511ff60]
chromeos-ec-0.0.1-r3794:    ??:0
chromeos-ec-0.0.1-r3794: #3 /lib64/libc.so.6(+0x69e55) [0x7fb315113e55]
chromeos-ec-0.0.1-r3794:    ??:0
chromeos-ec-0.0.1-r3794: #4 build/host/nvmem/nvmem.exe(get_persistent_storage+0x81) [0x4070c3]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/chip/host/persistence.c:38
chromeos-ec-0.0.1-r3794: #5 build/host/nvmem/nvmem.exe() [0x406806]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/chip/host/flash.c:39
chromeos-ec-0.0.1-r3794: #6 build/host/nvmem/nvmem.exe(flash_physical_write+0x85) [0x406974]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/chip/host/flash.c:79
chromeos-ec-0.0.1-r3794: #7 build/host/nvmem/nvmem.exe() [0x40c78d]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/common/nvmem.c:148
chromeos-ec-0.0.1-r3794: #8 build/host/nvmem/nvmem.exe(nvmem_commit+0xf5) [0x40d045]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/common/nvmem.c:602
chromeos-ec-0.0.1-r3794: #9 build/host/nvmem/nvmem.exe() [0x412b5d]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/test/nvmem.c:134
chromeos-ec-0.0.1-r3794: #10 build/host/nvmem/nvmem.exe() [0x413246]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/test/nvmem.c:341
chromeos-ec-0.0.1-r3794: #11 build/host/nvmem/nvmem.exe(run_test+0x184) [0x413e58]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/test/nvmem.c:706
chromeos-ec-0.0.1-r3794: #12 build/host/nvmem/nvmem.exe(_run_test+0x11) [0x411a4d]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/core/host/task.c:77
chromeos-ec-0.0.1-r3794: #13 build/host/nvmem/nvmem.exe(_task_start_impl+0x79) [0x412301]
chromeos-ec-0.0.1-r3794:    /build/peach_pit/tmp/portage/chromeos-base/chromeos-ec-0.0.1-r3794/work/platform/ec/core/host/task.c:406 (discriminator 1)
chromeos-ec-0.0.1-r3794: #14 /lib64/libpthread.so.0(+0x74b4) [0x7fb3154594b4]
chromeos-ec-0.0.1-r3794:    ??:0
chromeos-ec-0.0.1-r3794: #15 /lib64/libc.so.6(clone+0x6d) [0x7fb31519c69d]
chromeos-ec-0.0.1-r3794:    ??:0
chromeos-ec-0.0.1-r3794: =============================
chromeos-ec-0.0.1-r3794: make[1]: *** [Makefile.rules:213: run-nvmem] Error 1
Apr 26 2017
drinkcat, you're the last who touched src/platform/ec. Tag, you're it.
Apr 27 2017
cc-ing relevant people, from the output of `git log test/nvmem*`.
Apr 27 2017
nvmem_find_partition isn't a subtest. In the failure posted in the OP, the failure was in test_write_full_multi(). Do you have links to the other failures?
Apr 27 2017
Found the daisy_spring instance: https://uberchromegw.corp.google.com/i/chromeos/builders/daisy_spring-release/builds/2400 The test failure there was in a different subtest, test_buffer_overflow().
Apr 27 2017
However, in both cases, they seem to be failing on an fopen() when emulating flash on the host.[0] Not sure why... [0] - https://chromium.googlesource.com/chromiumos/platform/ec/+/59bd55c41e02eab2ac01cd3adbdda1950f7fb09e/chip/host/persistence.c#38
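For context, the host-side flash 'emulation' keeps its contents in a backing file next to the test binary and goes through fopen()/fclose() on every access. Very roughly, it looks like the sketch below; this is not the actual persistence.c source (the real get_persistent_storage() derives the file name from the program path at runtime), and the fixed file name here is just the one mentioned later in this thread.

#include <stdio.h>

/*
 * Simplified sketch of the host flash persistence path seen in the stack
 * trace. The hard-coded file name is an assumption for illustration; the
 * real code builds "<program path>_persist_<tag>" at runtime.
 */
static FILE *get_persistent_storage(const char *tag, const char *mode)
{
	char buf[512];

	snprintf(buf, sizeof(buf), "build/host/nvmem/nvmem.exe_persist_%s",
		 tag);
	return fopen(buf, mode);	/* the fopen() that fails/hangs */
}

static void save_flash(const char *data, size_t size)
{
	FILE *f = get_persistent_storage("flash", "wb");

	if (!f)
		return;
	fwrite(data, 1, size, f);
	fclose(f);
}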
Apr 27 2017
Ah, there were some intermittent failures in these tests, so at one point Scott (scollyer@) removed a test from the set (I don't remember which one and can't check now) to avoid the problem. Then, when adding some more tests to the set, I found and fixed a bug in the harness which I thought could have been the reason for the intermittent failure. I could not reproduce the problem on my setup and then re-enabled the test. It has been a few months since; not sure if the problem just resurfaced or has been going on unnoticed. Will have to confirm that this is the same problem, though.
May 25 2017
Another instance occurred on daisy_spring today: http://shortn/_ykBOJK1Bty I think there's something still lurking here. As mentioned in the other bug, I think there might be something funky with the emulated flash backing store. Oh look it's already assigned to me... vbendeb@, were you able to confirm if the issue was the same in c#7?
May 25 2017
This is the original bug I was talking about: https://b.corp.google.com/issues/35577053
May 27 2017
Just a reminder that we are still seeing this. I don't have the logs in front of me, but it's still causing build failures.
Jun 19 2017
Don't want to bump priority because I'm not sure how the EC folks handle priorities. This is still happening and at least confusing sheriffs. If this kills another build, I'll strongly argue for immediately removing that test from the build.
Jun 19 2017
Issue 730397 has been merged into this issue.
Jun 19 2017
is there a way to reproduce a builder environment at someone's desk? The error never happens when running on the desktop - every single repo upload requires the tests to be run, and they never fail on the desktop.
Jun 21 2017
If you cannot reproduce with the same build on a desktop (testing on the exact boards that are failing), then you should be able to:
- spawn a release trybot
- log into a release builder when a canary is not running and manually try to emerge that package/test
If it's the same test that is always failing, can you add extra debugging so that when it happens you have information to narrow it down?
Jun 21 2017
This test is not running on any board; it is running on the workstation. Is there a description of this process ("spawn a release trybot")? Is it the same as cbuildbot? And how do I log into it; is this described anywhere?
Jun 21 2017
Sorry, what I mean is that the test is failing when doing builds for certain boards. Is it failing across all boards, or just some of them? If just some of them, then ensure that you're building for those exact boards.
1. cbuildbot --remote xyz-release (or cbuildbot --local xyz-release)
2. Logging in depends on the type of machine (either bare metal *-m2 or GCE instances, e.g. wimpy/standard/beefy). https://chrome-internal.googlesource.com/infra/infra_internal/+/master/doc/ssh.md is the documentation I know of.
I'd definitely try #1 first because #2 is a bit more work to get going.
Jul 5 2017
Here is another similar instance (attaching log so it isn't lost to log rotation): https://uberchromegw.corp.google.com/i/chromeos/builders/chell-paladin/builds/1704/steps/UnitTest/logs/stdio In this case, it looks like fclose() is deadlocked or hung. I'm not sure what could be causing that. The man page mentions that double fclose() on the same descriptor causes undefined behavior, but checking the code, it doesn't seem possible. I guess another process calling fclose() on our descriptor is within the realm of possibility. There are also some interesting interactions with 'dup' I found mentioned on stackoverflow, but I don't think those apply here.
Aug 16 2017
I see that we have "Test nvmem timed out after 10 seconds!" in #18 and the duplicate #19 (we lost the logs for #0).

In chip/host/persistence.c, we actually try to open/close a file on the filesystem (build/host/nvmem/nvmem.exe_persist_flash). Could it be that CPU+I/O on the builder is just so busy (building other things in parallel, including other EC tests) that it can't complete the test in 10 seconds?!

Looking at the latest example: https://uberchromegw.corp.google.com/i/chromeos/builders/poppy-release/builds/657/steps/UnitTest/logs/stdio
chromeos-ec starts around 20:02:08 and fails a little before 20:11:31.
Builder stats: https://viceroy.corp.google.com/chrome_infra/Machines/per_machine?duration=6659s&hostname=cros-beefy386-c2&refresh=-1&utc_end=1502856659#_VG_XL6WGsQd
CPU is almost at 100% during that time (but I/O is not at peak).
Aug 16 2017
you mean the thread running open/close does not get to run, but the thread limiting execution time still runs?
Aug 16 2017
Right, it might make more sense if this was an I/O issue (so open/close are very slow, but the timeout thread manages to run)...
Aug 17 2017
I don't know how EC unit tests are executed, but wouldn't a timeout thread just sleep most of the time, without needing much CPU time, while the actual test would need a lot of CPU time?
Nov 29 2017
Happened again here: https://luci-milo.appspot.com/buildbot/chromeos/scarlet-paladin/2596 chromeos-ec-0.0.1-r4384: Test nvmem timed out after 10 seconds!
Dec 22 2017
This happened twice in the past 2 days. Here is the 2nd one: https://luci-milo.appspot.com/buildbot/chromeos/oak-paladin/9580
Jan 8 2018
There is no crash, contrary to the title of this bug (the backtrace is just the 'host' EC being shut down by a signal after the timeout).

In #25, the timeout is due to the fact that we are not able to parse the output properly despite the test running successfully. We are getting:

chromeos-ec-0.0.1-r4450: Testing short message (8 bytes)
chromeos-ec-0.0.1-r4450: Testing short message (72 bytes)
chromeos-ec-0.0.1-r4450: Testing long message (2888 bytes)
chromeos-ec-0.0.1-r4450: HMAC: Testing short key
chromeos-ec-0.0.1-r4450: HMAC: Testing medium key
chromeos-ec-0.0.1-r4450: PassConsole input initialized
chromeos-ec-0.0.1-r4450: !

I guess we never match 'Pass!' because 'Console input initialized' is inserted in the trace at that point. Maybe as a fix, we should wait for everything to be initialized, including the console, before running the test.
Jan 8 2018
Oh, wow, this is a great breakthrough. Vincent, why do you think this happens only on the builders? Where would we wait for init to complete, and would that increase the time it takes to run the tests? Maybe we should just look for 'Pass' as part of the output string, even if it is interleaved with other stuff?
Jan 8 2018
uart_init() is called before task scheduling starts, but take a closer look:
void uart_init(void)
{
	pthread_create(&input_thread, NULL, uart_monitor_stdin, NULL);
	stopped = 1;	/* Not transmitting yet */
	init_done = 1;
}
(uart_monitor_stdin() calls printf("Console input initialized\n"), then reads console input).
If I understand this correctly, the scheduling of input_thread is entirely the decision of the host OS; the cros_ec task scheduler has no input on the matter. So input_thread can preempt a printf() while it is writing chars to the output buffer, even though our test routine is the highest-priority task.
I suggest either removing the 'Console input initialized' print, or moving it to uart_init(), so the print happens deterministically. Another option is to call pthread_yield() or similar after the pthread_create() so uart_monitor_stdin() gets a chance to run before we kick off task scheduling.
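A rough sketch of what the synchronization variant could look like (the semaphore and its placement are my assumptions for illustration, not the actual chip/host/uart.c change):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static pthread_t input_thread;
static sem_t monitor_started;
static int stopped, init_done;

static void *uart_monitor_stdin(void *arg)
{
	(void)arg;
	printf("Console input initialized\n");
	sem_post(&monitor_started);	/* unblock uart_init() */
	/* ... then loop reading console input, as in the real code ... */
	return NULL;
}

void uart_init(void)
{
	sem_init(&monitor_started, 0, 0);
	pthread_create(&input_thread, NULL, uart_monitor_stdin, NULL);
	/*
	 * Wait until the monitor thread has run once, so its banner can no
	 * longer interleave with test output once task scheduling starts.
	 */
	sem_wait(&monitor_started);
	stopped = 1;	/* Not transmitting yet */
	init_done = 1;
}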
Jan 8 2018
BTW, I think some of the previously reported failures (e.g. #18) cannot be explained by this issue. Thinking about it more, it may be best to call pthread_yield() or similar after the pthread_create() in order to standardize the behavior and not be at the mercy of the OS scheduler, in case other races appear someday.
Jan 9 2018
I agree #18 is unlikely to be the uart issue, but that test was running in a VM, and file I/O taking more than 10 s is not totally uncommon on a VM; probably that is just it.
Jan 9 2018
Re. file I/O being slow on VM, can we detect that we're running on a VM or buildbot somehow and then bump the timeout in util/run_host_test if so? We could unconditionally bump this timeout, but it would be nice to fail in a reasonable time on developer workstations, if some code error does cause us to enter an infinite loop.
Jan 10 2018
Re #32, wrt detecting a VM: we can invent all sorts of questionable tricks, but they regularly create new creative systems to run builds... Maybe we can retry the test (once, or a low number of times) in run_host_test if we detect that the process was in 'D' state (uninterruptible sleep, so somewhat likely blocked on an I/O) when we kill it in the timeout condition. This should not be triggered too often if the task code is just looping.
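For illustration, the state check amounts to reading the third field of /proc/<pid>/stat. run_host_test itself is not C, so the sketch below (including the helper name) is only meant to show the idea:

#include <stdio.h>
#include <unistd.h>

/*
 * Return the kernel state character ('R', 'S', 'D', 'Z', ...) of a process,
 * or '?' on error. Sketch only; a robust parser would handle command names
 * containing ')' by scanning for the last closing parenthesis.
 */
static char get_process_state(pid_t pid)
{
	char path[64], comm[64], state = '?';
	int scanned;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/stat", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return '?';
	if (fscanf(f, "%d (%63[^)]) %c", &scanned, comm, &state) != 3)
		state = '?';
	fclose(f);
	return state;
}

int main(void)
{
	/* 'D' would mean uninterruptible sleep, i.e. likely stuck on I/O. */
	printf("our own state: %c\n", get_process_state(getpid()));
	return 0;
}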
Jan 10 2018
On our 'beefy' build VMs, crazy I/O latency should be a rare event... not sure it's worth adding more complexity, but if we want to test something along the lines of #33, we can start by adding a debug trace to check whether my hypothesis about the process state is true. It would be something such as: https://chromium-review.googlesource.com/#/c/chromiumos/platform/ec/+/859771
Jan 11 2018
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/ec/+/7bc128f7d1e7e6a59ed47cbb8ee9e944f17dc0b6

commit 7bc128f7d1e7e6a59ed47cbb8ee9e944f17dc0b6
Author: Shawn Nematbakhsh <shawnn@chromium.org>
Date: Thu Jan 11 02:20:01 2018

chip/host: uart: Run uart_monitor_stdin() before task scheduling

After a call to pthread_create(), it is indeterminate which thread (the caller or the new thread) will next execute. Synchronize with the new thread and allow it to initialize (and print to console, before the print can potentially interfere with other prints) before proceeding.

BUG=chromium:715011
BRANCH=None
TEST=Run 'make runtests', verify 'Console input initialized' is seen before '--- Emulator initialized after reboot ---':
====== Emulator output ======
No flash storage found. Initializing to 0xff.
No RAM data found. Initializing to 0x00.
Console input initialized
--- Emulator initialized after reboot ---

Signed-off-by: Shawn Nematbakhsh <shawnn@chromium.org>
Change-Id: Ieb622e9b7eea2d11d4a11a98bb503a44534f676c
Reviewed-on: https://chromium-review.googlesource.com/854989
Commit-Ready: Shawn N <shawnn@chromium.org>
Tested-by: Shawn N <shawnn@chromium.org>
Reviewed-by: Vincent Palatin <vpalatin@chromium.org>

[modify] https://crrev.com/7bc128f7d1e7e6a59ed47cbb8ee9e944f17dc0b6/chip/host/uart.c
Jan 15 2018
Another one here (so #35 did not fix everything, as expected): https://logs.chromium.org/v/?s=chromeos%2Fbb%2Fchromeos%2Fpoppy-release%2F1109%2F%2B%2Frecipes%2Fsteps%2FUnitTest%2F0%2Fstdout
Jan 15 2018
Looks similar to #18; it might be worth submitting the CL linked in #34.
Jan 18 2018
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/ec/+/07230f772e90650eb8dbac444ebe629272306049

commit 07230f772e90650eb8dbac444ebe629272306049
Author: Vincent Palatin <vpalatin@chromium.org>
Date: Thu Jan 18 13:09:42 2018

Add more debugging to run_host_test

If a host test fails, record the execution state of the EC host process. This is an attempt to provide a hint whether the test was blocked on I/O; if so it might be in 'D' state (but it might have recovered too late too).

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
BRANCH=none
BUG=chromium:715011
TEST=run 'make runtests -j', then run it again with TIMEOUT reduced to 1,
see it fail on 'kb_scan' and 'interrupt' tests with the trace containing
'*** test [...] in state Ssl+ ***'

Change-Id: I4590a4b84a2aba8d385d3ef911d5d0186e8ce2e3
Reviewed-on: https://chromium-review.googlesource.com/859771
Commit-Ready: Vincent Palatin <vpalatin@chromium.org>
Tested-by: Vincent Palatin <vpalatin@chromium.org>
Reviewed-by: Randall Spangler <rspangler@chromium.org>

[modify] https://crrev.com/07230f772e90650eb8dbac444ebe629272306049/util/run_host_test
Jan 30 2018
Latest one: https://luci-milo.appspot.com/buildbot/chromeos/kevin-paladin/3689 This one has the trace: the process was in 'Ssl+' state, i.e. interruptible sleep, but still likely blocked in the I/O.
Jan 30 2018
The problematic file for the I/O is the persistent flash storage, e.g. src/platform/ec/build/host/nvmem/nvmem.exe_persist_flash. Maybe, on those builders, we can store it outside the build directory hierarchy, in storage with a lower worst-case latency (e.g. ramfs). On those VM-based builders, is /tmp mounted on tmpfs? Also, for the flash case, I'm not even fully sure that we actually need the persistence rather than just allocating memory for the life of the process, do we?
Jan 30 2018
An idea along the lines of #40, using /dev/shm to store the persistent storage files in a RAM-backed filesystem: https://chromium-review.googlesource.com/#/c/chromiumos/platform/ec/+/893380 ("test: store persistence files in RAM"). The random guess is that a tmpfs filesystem should have a lower worst-case latency than the regular filesystem where the build is going on in parallel. It's true on my mostly-idle workstation, but it may or may not be on the VM-based builders.
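The naming scheme in that CL boils down to flattening the original backing-file path into a single file name under /dev/shm, so that the same build reuses its persistent storage while two different trees never collide. Something like this sketch (the helper name and buffer size are made up; the real persistence.c differs in detail):

#include <stdio.h>
#include <string.h>

/* Encode the original backing-file path into a /dev/shm file name. */
static void shm_persist_path(char *out, size_t out_size, const char *orig)
{
	size_t i, len;
	int n;

	n = snprintf(out, out_size, "/dev/shm/EC_persist_%s", orig);
	len = (n < 0) ? 0 : (size_t)n;
	if (len >= out_size)
		len = out_size - 1;
	/* Turn the embedded '/' separators into '_'. */
	for (i = strlen("/dev/shm/"); i < len; i++)
		if (out[i] == '/')
			out[i] = '_';
}

int main(void)
{
	char path[512];

	shm_persist_path(path, sizeof(path),
			 "/mnt/host/source/src/platform/ec/build/host/nvmem/nvmem.exe_flash");
	/* -> /dev/shm/EC_persist__mnt_host_source_src_platform_ec_build_host_nvmem_nvmem.exe_flash */
	printf("%s\n", path);
	return 0;
}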
Jan 31 2018
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/ec/+/924d21d904b9f2c640ee5b0ccabcf78200456a0f

commit 924d21d904b9f2c640ee5b0ccabcf78200456a0f
Author: Vincent Palatin <vpalatin@chromium.org>
Date: Wed Jan 31 13:58:03 2018

test: store persistence files in RAM

On VM-based builders, the nvmem unittest was sometimes missing the 10-second deadline, likely being stuck in slow I/Os. Try to move the persistent storage files used for flash 'emulation' on host from the build directory to a RAM-backed filesystem in /dev/shm in order to mitigate this bottleneck. Store the new backing files in a path like:
/dev/shm/EC_persist__mnt_host_source_src_platform_ec_build_host_nvmem_nvmem.exe_flash
in order to keep the properties of the old system: subsequent runs of the same build will use the same persistent storage but 2 different trees won't mix up.

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
BRANCH=none
BUG=chromium:715011
TEST=make runtests
TEST=run the following command with and without this change:
'for i in 0 1 2 3 4 5 6 7 8 9 ; do time make run-nvmem ; done'
and see the average test time around 500 ms without the change and around 320 ms with it on an idle and beefy workstation.

Change-Id: Ic2ff6511b81869171efc484ca805f8c0d6008595
Reviewed-on: https://chromium-review.googlesource.com/893380
Commit-Ready: Vincent Palatin <vpalatin@chromium.org>
Tested-by: Vincent Palatin <vpalatin@chromium.org>
Reviewed-by: Vincent Palatin <vpalatin@chromium.org>

[modify] https://crrev.com/924d21d904b9f2c640ee5b0ccabcf78200456a0f/chip/host/persistence.c
Jan 31 2018
If somebody sees the timeout failure on the nvmem unittest again with the patch above, please report it here.
Feb 20 2018
This happened on the R65 release branch. https://logs.chromium.org/v/?s=chromeos%2Fbb%2Fchromeos_release%2Fcaroline-release_release-R65-10323.B%2F33%2F%2B%2Frecipes%2Fsteps%2FUnitTest%2F0%2Fstdout Maybe worth merging back? If you agree feel free to merge.
Feb 21 2018
I haven't heard negative things on ToT since then, i.e. it likely does not fail anymore, but since the issue only happens when builder performance is terrible, new cases might not have been reported here either. We can definitely try it on R65 and close this.
Feb 21 2018
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/ec/+/de18dc5ab697e391a5318ec1620dadf3d56a7440

commit de18dc5ab697e391a5318ec1620dadf3d56a7440
Author: Vincent Palatin <vpalatin@chromium.org>
Date: Wed Feb 21 06:44:12 2018

test: store persistence files in RAM

On VM-based builders, the nvmem unittest was sometimes missing the 10-second deadline, likely being stuck in slow I/Os. Try to move the persistent storage files used for flash 'emulation' on host from the build directory to a RAM-backed filesystem in /dev/shm in order to mitigate this bottleneck. Store the new backing files in a path like:
/dev/shm/EC_persist__mnt_host_source_src_platform_ec_build_host_nvmem_nvmem.exe_flash
in order to keep the properties of the old system: subsequent runs of the same build will use the same persistent storage but 2 different trees won't mix up.

Signed-off-by: Vincent Palatin <vpalatin@chromium.org>
BRANCH=none
BUG=chromium:715011
TEST=make runtests
TEST=run the following command with and without this change:
'for i in 0 1 2 3 4 5 6 7 8 9 ; do time make run-nvmem ; done'
and see the average test time around 500 ms without the change and around 320 ms with it on an idle and beefy workstation.

Change-Id: Ic2ff6511b81869171efc484ca805f8c0d6008595
Reviewed-on: https://chromium-review.googlesource.com/893380
Commit-Ready: Vincent Palatin <vpalatin@chromium.org>
Tested-by: Vincent Palatin <vpalatin@chromium.org>
Reviewed-by: Vincent Palatin <vpalatin@chromium.org>
(cherry picked from commit 924d21d904b9f2c640ee5b0ccabcf78200456a0f)
Reviewed-on: https://chromium-review.googlesource.com/928121
Commit-Queue: Vincent Palatin <vpalatin@chromium.org>
Trybot-Ready: Vincent Palatin <vpalatin@chromium.org>

[modify] https://crrev.com/de18dc5ab697e391a5318ec1620dadf3d56a7440/chip/host/persistence.c
Feb 21 2018
Please re-open if you see it on a builder (and put a link to the build).