"maps_pixel_test (with patch)" recently became flaky on Mac, Windows, Linux and Android |
Issue description: "maps_pixel_test (with patch)" is flaky. This issue was created automatically by the chromium-try-flakes app. Please find the right owner to fix the respective test/step and assign this issue to them. If the step/test is infrastructure-related, please add Infra-Troopers label and change issue status to Untriaged. When done, please remove the issue from Sheriff Bug Queue by removing the Sheriff-Chromium label. We have detected 3 recent flakes. List of all flakes can be found at https://chromium-try-flakes.appspot.com/all_flake_occurrences?key=ahVzfmNocm9taXVtLXRyeS1mbGFrZXNyJwsSBUZsYWtlIhxtYXBzX3BpeGVsX3Rlc3QgKHdpdGggcGF0Y2gpDA. This flaky test/step was previously tracked in issue 624577.
,
Jul 22 2016
Nothing got fixed. Reopening.
,
Jul 22 2016
Issue 630292 has been merged into this issue.
,
Jul 22 2016
Assigning to some people who work in the relevant directory. Revising tags as well.
,
Jul 22 2016
BTW, it always seems to be maps_004 that fails. Here's an example failure:

... (INFO) 2016-07-20 12:25:23,422 browser.DumpStateUponFailure:358 ***************** END OF BROWSER LOG ******************
(WARNING) 2016-07-20 12:25:23,422 shared_page_state.DumpStateUponFailure:144 Taking screenshots upon failures disabled.
Maps' devicePixelRatio is 2
See http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?d98d1d65536b600b9a6c0a87cbe6bbd7ea523a94_mac_chromium_rel_ng_telemetry for this run's test results
Traceback (most recent call last):
  File "/b/swarm_slave/w/irdRH9Ys/third_party/catapult/telemetry/telemetry/internal/story_runner.py", line 85, in _RunStoryAndProcessErrorIfNeeded
    state.RunStory(results)
  File "/b/swarm_slave/w/irdRH9Ys/content/test/gpu/gpu_tests/gpu_test_base.py", line 111, in RunStory
    RunStoryWithRetries(GpuSharedPageState, self, results)
  File "/b/swarm_slave/w/irdRH9Ys/content/test/gpu/gpu_tests/gpu_test_base.py", line 72, in RunStoryWithRetries
    super(cls, shared_page_state).RunStory(results)
  File "/b/swarm_slave/w/irdRH9Ys/third_party/catapult/telemetry/telemetry/page/shared_page_state.py", line 321, in RunStory
    self._current_page, self._current_tab, results)
  File "/b/swarm_slave/w/irdRH9Ys/content/test/gpu/gpu_tests/maps.py", line 57, in ValidateAndMeasurePage
    tab, page.display_name, screenshot, expected, dpr)
  File "/b/swarm_slave/w/irdRH9Ys/content/test/gpu/gpu_tests/cloud_storage_test_base.py", line 257, in _ValidateScreenshotSamples
    self.options.test_machine_name)
  File "/b/swarm_slave/w/irdRH9Ys/content/test/gpu/gpu_tests/cloud_storage_test_base.py", line 88, in _CompareScreenshotSamples
    str(actual_color.b) + "]")
Failure: Expected pixel at [590, 390] (actual pixel (1180, 780)) to be [145, 188, 255] but got [233, 229, 220]
[ FAILED ] Maps.maps_004 (17147 ms)
[18427:1299:0720/122523:WARNING:url_request_context_getter.cc(43)] URLRequestContextGetter leaking due to no owning thread.
[ PASSED ] 0 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] Maps.maps_004
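For context on the failure message above: the expected colors are specified at CSS-pixel locations, and the harness scales them by the page's devicePixelRatio (2 here) before sampling the screenshot, which is why expected pixel [590, 390] is read from device pixel (1180, 780). A rough JavaScript sketch of that comparison, purely illustrative (the function and parameter names are made up; the real check lives in the Python harness, cloud_storage_test_base.py):

function checkPixel(imageData, cssX, cssY, expectedRGB, dpr, tolerance) {
  // Convert CSS-pixel coordinates to device pixels.
  var x = Math.round(cssX * dpr);
  var y = Math.round(cssY * dpr);
  var i = (y * imageData.width + x) * 4;  // RGBA layout, 4 bytes per pixel
  var actual = [imageData.data[i], imageData.data[i + 1], imageData.data[i + 2]];
  // Every channel must be within the allowed tolerance of the expectation.
  return actual.every(function(channel, k) {
    return Math.abs(channel - expectedRGB[k]) <= tolerance;
  });
}
// e.g. checkPixel(screenshot, 590, 390, [145, 188, 255], 2, 0) for the failing sample above.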
,
Aug 2 2016
The logs contain a screenshot when the test fails. Here are a couple:
http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?7b8a9fb57c2ba79bec6b255229ff519d1a2de091_linux_chromium_rel_ng_telemetry
http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?a3254dbccb03b0de8ac2b56b456732400cbc74be_mac_chromium_rel_ng_telemetry
It looks like occasionally none of the map tiles load. This has happened twice in the past several days. CC'ing the loader folks.

There's also a random GPU process crash on Linux:

Received signal 11 SEGV_MAPERR fffffffc014ca464
(INFO) 2016-07-22 16:16:39,063 cache_temperature.EnsurePageCacheTemperature:55 PageCacheTemperature: any
#0 0x7fd79dcf3b37 base::debug::(anonymous namespace)::StackDumpSignalHandler()
#1 0x7fd79a6bb340 <unknown>
#2 0x7fd78cbe9f17 <unknown>
#3 0x7fd78c859e24 <unknown>
#4 0x7fd78c861c9c <unknown>
#5 0x7fd78c861ec2 <unknown>
#6 0x7fd79b83a127 gpu::gles2::TextureManager::SetParameteri()
#7 0x7fd79b7c317e gpu::gles2::GLES2DecoderImpl::HandleTexParameteri()
#8 0x7fd79b7db508 gpu::gles2::GLES2DecoderImpl::DoCommandsImpl<>()
#9 0x7fd79b794ed6 gpu::CommandParser::ProcessCommands()
#10 0x7fd79b795d05 gpu::CommandExecutor::PutChanged()
#11 0x7fd79b877ae9 gpu::GpuCommandBufferStub::OnAsyncFlush()
#12 0x7fd79b877882 _ZN3IPC8MessageTI35GpuCommandBufferMsg_AsyncFlush_MetaSt5tupleIJijSt6vectorIN2ui11LatencyInfoESaIS5_EEEEvE8DispatchIN3gpu20GpuCommandBufferStubESC_vMSC_FvijRKS7_EEEbPKNS_7MessageEPT_PT0_PT1_T2_
#13 0x7fd79b875dbb gpu::GpuCommandBufferStub::OnMessageReceived()
#14 0x7fd79ec7c3e7 IPC::MessageRouter::RouteMessage()
#15 0x7fd79b871151 gpu::GpuChannel::HandleMessageHelper()
#16 0x7fd79b870f04 gpu::GpuChannel::HandleMessage()
#17 0x7fd79dd82699 base::debug::TaskAnnotator::RunTask()
#18 0x7fd79dd12b85 base::MessageLoop::RunTask()
#19 0x7fd79dd12e98 base::MessageLoop::DeferOrRunPendingTask()
#20 0x7fd79dd131db base::MessageLoop::DoWork()
#21 0x7fd79dd14f4a base::(anonymous namespace)::WorkSourceDispatch()
#22 0x7fd799629e04 g_main_context_dispatch
#23 0x7fd79962a048 <unknown>
#24 0x7fd79962a0ec g_main_context_iteration
#25 0x7fd79dd14d16 base::MessagePumpGlib::Run()
#26 0x7fd79dd12681 base::MessageLoop::RunHandler()
#27 0x7fd79dd35970 base::RunLoop::Run()
#28 0x7fd79d81de96 content::GpuMain()
#29 0x7fd79d8a6a5b content::RunNamedProcessTypeMain()
#30 0x7fd79d8a74d3 content::ContentMainRunnerImpl::Run()
#31 0x7fd79d8a5da0 content::ContentMain()
#32 0x7fd79b61e27b ChromeMain
#33 0x7fd79453bec5 __libc_start_main
#34 0x7fd79b61e14d <unknown>
r8: 0000000000000000  r9: 0000000000000001  r10: 00000000ffffffff  r11: ffffffffffffffff
r12: 0000000000000000  r13: fffffffc014324ec  r14: 000011045e4f9b40  r15: 0000000000000004
di: 000011045e4f9be8  si: 0000000000000001  bp: 0000000000000000  bx: 0000000000000000
dx: 0000000000000000  ax: fffffffc014ea1d4  cx: 00007fd78d84c680  sp: 00007ffc7aa407f0
ip: 00007fd78cbe9f17  efl: 0000000000010246  cgf: 0000000000000033  erf: 0000000000000005
trp: 000000000000000e  msk: 0000000000000000  cr2: fffffffc014ca464
[end of stack trace]

I don't know whether we want to try investigating the latter, and I don't have any ideas for how to proceed with either of these. Victor, if you think you could find someone from the GPU team to investigate the latter crash, please do so -- otherwise let's close this as WontFix. (The flakiness rate is already very low -- on the order of once a day -- so this isn't as high-priority an issue as some others.)
,
Aug 4 2016
,
Aug 10 2016
Issue 636174 has been merged into this issue.
,
Nov 9 2016
Similar flaking is recently seen on Win 7 buildbot, sample build: https://build.chromium.org/p/chromium.gpu/builders/Win7%20Release%20%28NVIDIA%29/builds/58809 (log attached). Could someone please disable this test?
,
Nov 9 2016
It also flakes on Mac (log attached): https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Debug%20%28Intel%29/builds/19428 https://codereview.chromium.org/2487973002 is in flight disabling the test on the recently flaking bots.
,
Nov 9 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/8cf7cb095499c4627228940c2ab0d07aba839d4f

commit 8cf7cb095499c4627228940c2ab0d07aba839d4f
Author: vabr <vabr@chromium.org>
Date: Wed Nov 09 15:24:13 2016

Disable maps_pixel_test on Mac and Win

The test is flaky on Mac 10.10 Debug (Intel) and Win7 Release (NVIDIA).

BUG=626986
TBR=vmiura@chromium.org,kbr@chromium.org
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel

Review-Url: https://codereview.chromium.org/2487973002
Cr-Commit-Position: refs/heads/master@{#430934}

[modify] https://crrev.com/8cf7cb095499c4627228940c2ab0d07aba839d4f/content/test/gpu/generate_buildbot_json.py
,
Nov 9 2016
Reassigning to jbauman@ as the current GPU sheriff and adding the builders to the description. On the Mac, it looks like this was the first failure on the waterfall, yesterday: https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Debug%20%28Intel%29/builds/19394 and it has failed six out of the last fifty builds ...
,
Nov 9 2016
This is a serious regression. I'm taking this bug and raising to P0. Trying to come up with a regression range.
,
Nov 9 2016
,
Nov 9 2016
Note that this is flaking on many machines, including but not limited to: https://build.chromium.org/p/chromium.gpu/waterfall?builder=Mac%2010.10%20Debug%20(Intel) https://build.chromium.org/p/chromium.gpu/waterfall?builder=Mac%2010.10%20Retina%20Release%20(AMD) https://build.chromium.org/p/chromium.gpu.fyi/builders/Win7%20Release%20%28NVIDIA%29?numbuilds=200
,
Nov 9 2016
,
Nov 9 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/ed9e7b5e2ed9df6f02ea4638523b2da65a59a8ba

commit ed9e7b5e2ed9df6f02ea4638523b2da65a59a8ba
Author: kbr <kbr@chromium.org>
Date: Wed Nov 09 18:28:29 2016

Temporarily mark Maps.maps_004 flaky on Mac and Win.

Revert commit 8cf7cb095499c4627228940c2ab0d07aba839d4f, which was not the correct way of performing this disable.

BUG=626986
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel
TBR=vmiura@chromium.org,zmo@chromium.org
NOTRY=true

Review-Url: https://codereview.chromium.org/2487093003
Cr-Commit-Position: refs/heads/master@{#430984}

[modify] https://crrev.com/ed9e7b5e2ed9df6f02ea4638523b2da65a59a8ba/content/test/gpu/generate_buildbot_json.py
[modify] https://crrev.com/ed9e7b5e2ed9df6f02ea4638523b2da65a59a8ba/content/test/gpu/gpu_tests/maps_expectations.py
,
Nov 9 2016
It's not just Mac and Win; it's failed on Linux too: https://build.chromium.org/p/chromium.gpu/builders/Linux%20Release%20%28NVIDIA%29/builds/90089 http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?6f34e7c74ea016495fe3772da1a33062ffce3221_Linux_Release_NVIDIA__telemetry https://build.chromium.org/p/chromium.gpu/builders/Linux%20Release%20%28NVIDIA%29/builds/90084 http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?3566c03a7c2dd1bbd05ae396fbc414bf396a71da_Linux_Release_NVIDIA__telemetry
,
Nov 9 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/0c1f6c80f53c47284e459552538e64baff9d5eaa

commit 0c1f6c80f53c47284e459552538e64baff9d5eaa
Author: kbr <kbr@chromium.org>
Date: Wed Nov 09 18:50:36 2016

Mark Maps.maps_004 flaky on all platforms; flakiness seen on Linux too.

BUG=626986
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel
TBR=zmo@chromium.org,vmiura@chromium.org
NOTRY=true

Review-Url: https://codereview.chromium.org/2484393003
Cr-Commit-Position: refs/heads/master@{#430992}

[modify] https://crrev.com/0c1f6c80f53c47284e459552538e64baff9d5eaa/content/test/gpu/gpu_tests/maps_expectations.py
,
Nov 9 2016
Based on the logs from this bot: https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Retina%20Release%20%28AMD%29?numbuilds=200
Here's a plausibly good build: https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Retina%20Release%20%28AMD%29/builds/21428
and the first bad build: https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Retina%20Release%20%28AMD%29/builds/21459
Resulting in this changelog: http://crrev.com/2d06fbcba298107e6f1ebad27af7b18995e389e4..b147fb4c729afc68a89e3595b09b6ac691776970
,
Nov 9 2016
Hi Ken, I think we can narrow the range a little bit more. On this bot: https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Debug%20%28Intel%29?numbuilds=200
the first bad build is https://build.chromium.org/p/chromium.gpu/builders/Mac%2010.10%20Debug%20%28Intel%29/builds/19394 where the commit position is #430837. The list then becomes slightly shorter: https://chromium.googlesource.com/chromium/src/+log/2d06fbcba298107e6f1ebad27af7b18995e389e4..a081fcfbc7471a6681142d49491b82f7b8402851
Not sure if that makes sense.
,
Nov 9 2016
Excellent - thanks Xida.
,
Nov 9 2016
,
Nov 9 2016
Looking through that changelog, I'm not seeing any obviously plausible suspects. Both of the following seem really unlikely:
ANGLE roll (unlikely; it would have been reliably broken): https://chromium.googlesource.com/angle/angle.git/+log/e5c53e3..b7bf742
AcceleratedStaticBitmapImage refactoring (unlikely, since loaded HTMLImageElements don't use this): https://chromium.googlesource.com/chromium/src/+/5717f6e18242d06a01cab340f086624b8f55cad6
I'm downgrading to P1 since the flakiness suppressions should handle the immediate failures on the commit queue. More eyeballs on this problem are welcome. Maybe the regression range needs to be extended to take into account earlier revisions.
,
Nov 9 2016
I'm currently working on bisecting this with --times=10. We'll see what happens.
,
Nov 9 2016
Thanks John for helping track this down. Let me assign it to you while you're investigating.
,
Nov 9 2016
For my first attempt I got https://chromium.googlesource.com/chromium/src/+log/e37736a37364199ffddbbb3684d38f84da9a1d47..dd24d1cb3717941200e13f7796b0a359b16a0c00 , which doesn't make sense, so I'm going to try again. The command I ran was:
tools/bisect-builds.py -b 430837 -g 430000 --verify-range -c "C:/python_27_amd64/files/python.exe D:/src/chrome/src/content/test/gpu/run_gpu_test.py maps --browser=exact --browser-executable=%p" -a win --use-local-cache -t10 --not-interactive
,
Nov 9 2016
Thanks John for trying. When you run locally with ToT do you see a failure reliably within 10 runs?
,
Nov 9 2016
It's taken forever for me to sync to and build ToT, so I haven't tried that yet. --verify-range on bisect-builds hasn't complained about an incorrect range and I've used it twice so far, so at least the failure is relatively reliable.
,
Nov 9 2016
Aha, I see -- awesome. Thank you.
,
Nov 10 2016
Also Android: https://build.chromium.org/p/chromium.gpu.fyi/builders/Android%20Release%20%28Nexus%205X%29/builds/3813 https://build.chromium.org/p/chromium.gpu.fyi/builders/Android%20Release%20%28Nexus%205X%29/builds/3817 https://build.chromium.org/p/chromium.gpu.fyi/builders/Android%20Release%20%28Nexus%206P%29/builds/3120 https://build.chromium.org/p/chromium.gpu.fyi/builders/Android%20Release%20%28Nexus%206P%29/builds/3125
,
Nov 10 2016
I'm doing a manual bisect with a slightly more reliable reproducer and got https://chromium.googlesource.com/chromium/src/+log/b8cb16f79a95c8bc639a9c58785a8484c197aca9..0b5fe82535de96d22d0bd32704e0b3b57f6a07b7 so it may be caused by r430579. I'm going to keep re-bisecting it so I can be sure, though.
,
Nov 10 2016
,
Nov 10 2016
Note for interested parties that running these tests locally is documented here: https://www.chromium.org/developers/testing/gpu-testing#TOC-Running-the-GPU-Tests-Locally so to do a single run with the checked-in code: ./content/test/gpu/run_gpu_test.py maps --browser=release
,
Nov 10 2016
I'm still working on this, but I can't seem to get a reliable bisect, even with the more reliable reproduction case. I've been trying on official builds recently, and it seems like the issue there doesn't happen until somewhat later. I've had it happen on r430740, but only about 1 in 40 times, whereas on r430800 it happens about 1 in 3 times. The approximate range is https://chromium.googlesource.com/chromium/src/+log/3e652a8cbbe2991e34e65cbb7635bc3d89db19f4..f5225f68f44886f5d020d9e92a3dba6fac23a1ab, particularly later changes, but I can't see a way to narrow it down.
,
Nov 10 2016
Thanks for continuing to debug this, John. Looking through the changelog for likely candidates:

https://chromium.googlesource.com/chromium/src/+/fddb451107a7468066fd17996e1920d1ee3289a4
Could this have affected it? Could image decodes be getting dropped randomly? Is this code used when loading HTML images in the main frame today, or just out-of-process iframes?

https://chromium.googlesource.com/chromium/src/+/06ed314bf08787f9aa178b02d339f6993f3c39c9
Could the Skia fix for invalid JPEGs have affected things?

https://chromium.googlesource.com/chromium/src/+/6eb90e1d7659979b114398ea40834ac7d9a34ff4
(cavalcantii@, we spoke offline but it looks like the previous candidate change probably isn't it) Seems unlikely, but this is image-related?

https://chromium.googlesource.com/chromium/src/+/b6952a45495c35c6c0eb2c8ae3abf93058d1cb58
Could changes to V8's GC be inadvertently GC'ing images while they're pending load? This has been a persistent problem for which we never produced a reliable stress test.

https://chromium.googlesource.com/chromium/src/+/ed92879a882400e275e355afcb26beda245ea548
https://chromium.googlesource.com/chromium/src/+/552106fc740055f6738945769752e550a5a74956
Any possibility that the V8 compiler changes could have introduced an intermittent bug?

https://chromium.googlesource.com/chromium/src/+/0b5fe82535de96d22d0bd32704e0b3b57f6a07b7
Seems like a likely candidate assuming it's still within the regression range.
,
Nov 10 2016
I might have to give up for now, as I'm not making much progress. I did some testing on Linux, and it seems like it's most likely some DEPS roll (based on the fact that I need to do a gclient sync to affect it), and the probability of the bug happening increases between r430750 and r430775.
,
Nov 10 2016
kbr: The rev mentioned for V8 GC (b6952a45495c35c6c0eb2c8ae3abf93058d1cb58) does not include any semantic GC changes. The cleanups do not change any behavior and, as far as I can see, don't even change timing.
,
Nov 10 2016
0b5fe82535de96d2 is likely unrelated. TaskHandle is a new API, and it had no users at that time.
,
Nov 10 2016
Maybe you can use http://rr-project.org for replaying the failure.
,
Nov 10 2016
@Kenneth
I tested locally by 'reverting' the border-image-repeat patch (by simply forcing 'space' to become 'repeat' as it was before) and the test seems to continue failing.
Please see below (not sure if I'm running the test in the right way, I'm assuming that content shell is enough?):
adenilson@ux:~/chromium/src$ git diff
diff --git a/third_party/WebKit/Source/platform/graphics/Image.cpp b/third_party/WebKit/Source/pla
index 8016f7a..37894eb 100644
--- a/third_party/WebKit/Source/platform/graphics/Image.cpp
+++ b/third_party/WebKit/Source/platform/graphics/Image.cpp
@@ -131,6 +131,11 @@ void Image::drawTiled(GraphicsContext& ctxt,
TileRule hRule,
TileRule vRule,
SkBlendMode op) {
+ if (hRule == SpaceTile)
+ hRule = RepeatTile;
+ if (vRule == SpaceTile)
+ vRule = RepeatTile;
+
// TODO(cavalcantii): see crbug.com/662513.
FloatSize tileScaleFactor = providedTileScaleFactor;
if (vRule == RoundTile) {
adenilson@ux:~/chromium/src$ ninja -C out/Release content_browsertests -j4
ninja: Entering directory `out/Release'
[2/21] SOLINK ./libblink_platform.so
adenilson@ux:~/chromium/src$ ./content/test/gpu/run_gpu_test.py maps --browser=content-shell-release
(WARNING) 2016-11-10 10:07:52,751 desktop_browser_finder.FindAllAvailableBrowsers:147 Chrome build location for linux_x86_64 not found. Browser will be run without Flash.
(WARNING) 2016-11-10 10:07:55,811 desktop_browser_finder.FindAllAvailableBrowsers:147 Chrome build location for linux_x86_64 not found. Browser will be run without Flash.
[ RUN ] Maps.maps_004
FLAKY TEST FAILURE, retrying: Maps.maps_004
FLAKY TEST FAILURE, retrying: Maps.maps_004
Questions:
a) What is the expected behavior?
b) What exactly is this test doing?
c) Where could I find more information about these GPU tests?
Please let me know if there is anything else I could help with.
,
Nov 11 2016
I'm investigating this from the perspective of the test now. I've added a bunch of logging and I'm trying to narrow down what's going wrong. I'll report back tomorrow with more info.
,
Nov 11 2016
(Removing myself, because I am no longer a sheriff and have no bandwidth to contribute here.)
,
Nov 11 2016
rsturgell@ mentioned yesterday that this was looking like a compiler bug, and jbauman@ mentioned there were some TurboFan changes in the regression range. hpayer@ suggested running Canary with --js-flags=--noturbo, and the problem disappears. Benedikt, can you take responsibility for getting this bug investigated? The test case is here (Google employees only): https://drive.google.com/a/google.com/file/d/0B5DES7PYkZBLLW1oQTN4ZGVJNTg/view?usp=sharing Extract the archive, run "python -m SimpleHTTPServer" in the directory, and navigate to localhost:8000/performance.html. If the map tiles all appear, there is no bug. If most or all of the map tiles are missing, the bug has reproduced.
,
Nov 11 2016
,
Nov 11 2016
I'm slowly convincing myself this is a V8 bug. I added a bunch of logging to figure out where we go off the rails, and narrowed it down to a certain property having an unexpected value. I was logging all the places we set it, and the write is somehow not captured. In order to find the unexpected place setting the property, I turned it into a __defineGetter__/__defineSetter__ pair. But this change "fixes" the test (all data successfully loads and renders). I'm guessing this affects V8's ability to apply certain optimizations.
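For anyone who wants to reuse that instrumentation trick: the idea is to replace a plain data property with an accessor pair so that every write can be logged along with its stack. A minimal sketch, with hypothetical object and property names (this is not the actual Maps code):

var obj = { status_: 0 };   // hypothetical object under suspicion
var shadow = obj.status_;   // back the accessors with a plain variable
obj.__defineGetter__('status_', function() { return shadow; });
obj.__defineSetter__('status_', function(value) {
  console.log('status_ <-', value, '\n' + new Error().stack);  // log every writer
  shadow = value;
});

As noted above, converting the property into an accessor also changes how the engine can optimize code touching it, which is presumably why the instrumentation itself made the failure disappear.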
,
Nov 11 2016
Jaro, can you take a look at this one please?
,
Nov 14 2016
Jaro: I have a d8-runnable repro here: https://drive.google.com/a/google.com/file/d/0B_9Rl1deLXKzWFBnQUQ3UWZQdFU/view?usp=sharing This is a version of the maps performance test with a mocked DOM and WebGL. I added a call to exit(123) at the place in the code where I believe we have an incorrect logic flow. Expand that zip and run: d8 run_basic_performance.js; echo $? If it runs successfully it will print out some stats and "0"; if it hits the bug it will print 123. At head, I get about a 50% failure rate. The problem appears to be a bug in the way TurboFan is optimizing this method: JSCompiler_StaticMethods_runOneJobAtPriority_ If I remove the try/catch in the method (allowing Crankshaft to handle it) or filter it out using --turbo-filter, the test always succeeds. Unfortunately the test also succeeds if I use --trace-turbo, so I don't know for sure what's going wrong.
,
Nov 14 2016
Thanks, I have already managed to identify the misbehaving function in the perf test (function uK in tracked.js) and extract a one-line snippet that mis-compiles. It is pretty clear that we somehow miscompile a property store, but I still have to investigate the exact details. A d8 repro is certainly very helpful!
,
Nov 14 2016
I should say that I have exactly the same problem with --trace-turbo - it never reproduces with that. Strangely, with --trace-turbo we somehow do not even get to compile the function. Tomorrow, I will add some manual tracing to see what is happening.
,
Nov 14 2016
Thanks Ryan and Jaro for pursuing this. Since this is a serious regression and the branch point is close I'm marking it as a release blocker for M56.
,
Nov 15 2016
The following revision refers to this bug:
https://chromium.googlesource.com/v8/v8.git/+/1900760e8fb2bb8682d44ab3a58f8196230598da

commit 1900760e8fb2bb8682d44ab3a58f8196230598da
Author: jarin <jarin@chromium.org>
Date: Tue Nov 15 13:16:19 2016

[turbofan] Fix deopt check for storing into constant field.

BUG=chromium:626986

Review-Url: https://codereview.chromium.org/2503863002
Cr-Commit-Position: refs/heads/master@{#40990}

[modify] https://crrev.com/1900760e8fb2bb8682d44ab3a58f8196230598da/src/compiler/js-native-context-specialization.cc
[add] https://crrev.com/1900760e8fb2bb8682d44ab3a58f8196230598da/test/mjsunit/compiler/regress-626986.js
,
Nov 15 2016
There is a silly bug in compilation for assignment to a so-far-constant field. Repro is below. Unfortunately, the bug is in both M54 and M55.
// Flags: --allow-natives-syntax --turbo
function g() {
  return 42;
}
var o = {};
function f(o, x) {
  o.f = x;
}
// Warm up: o.f has only ever held the closure g, so the field is
// tracked as holding a constant value.
f(o, g);
f(o, g);
f(o, g);
%OptimizeFunctionOnNextCall(f);
// Storing a different closure must invalidate that assumption (or deopt);
// with the bug, the store is mishandled and o.f() can still return 42.
f(o, function() { return 0; });
assertEquals(0, o.f());
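For reference, one way to run the snippet above standalone (assuming a local d8 build inside a V8 checkout; assertEquals comes from the mjsunit harness, so mjsunit.js has to be loaded first, and "repro.js" is just a placeholder name for the saved snippet):

d8 --allow-natives-syntax --turbo test/mjsunit/mjsunit.js repro.js

With the bug present the final assertion fails; with the fix, or with --noturbo, it passes.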
,
Nov 15 2016
,
Nov 15 2016
[Auto-generated comment by a script] We noticed that this issue is targeted for M-55; it appears the fix may have landed after branch point, meaning a merge might be required. Please confirm if a merge is required here - if so add Merge-Request-55 label, otherwise remove Merge-TBD label. Thanks.
,
Nov 15 2016
,
Nov 15 2016
Your change meets the bar and is auto-approved for M55 (branch: 2883)
,
Nov 15 2016
Thanks Jaro for getting to the bottom of this!
,
Nov 15 2016
Note, Jaro, could you please post here when you think that V8 has rolled forward in Chromium to include your fix? We would like to remove the "flaky" expectation from src/content/test/gpu/gpu_tests/maps_expectations.py . (It has to be replaced with "pass".) Thanks.
,
Nov 16 2016
Our tooling says my CL is in v8 5.6.319, which should be in Chromium revision 432223 (and version 56.0.2921.0).
,
Nov 18 2016
The following revision refers to this bug:
https://chromium.googlesource.com/v8/v8.git/+/e7cacef1983bd4cec7ab49f569cba984eac520f8

commit e7cacef1983bd4cec7ab49f569cba984eac520f8
Author: Jaroslav Sevcik <jarin@chromium.org>
Date: Fri Nov 18 11:57:38 2016

Merged: [turbofan] Fix deopt check for storing into constant field.

Revision: 1900760e8fb2bb8682d44ab3a58f8196230598da

BUG=chromium:626986
LOG=N
NOTRY=true
NOPRESUBMIT=true
NOTREECHECKS=true
R=bmeurer@chromium.org

Review URL: https://codereview.chromium.org/2517543002 .

Cr-Commit-Position: refs/branch-heads/5.5@{#48}
Cr-Branched-From: 3cbd5838bd8376103daa45d69dade929ee4e0092-refs/heads/5.5.372@{#1}
Cr-Branched-From: b3c8b0ce2c9af0528837d8309625118d4096553b-refs/heads/master@{#40015}

[modify] https://crrev.com/e7cacef1983bd4cec7ab49f569cba984eac520f8/src/compiler/js-native-context-specialization.cc
[add] https://crrev.com/e7cacef1983bd4cec7ab49f569cba984eac520f8/test/mjsunit/compiler/regress-626986.js
,
Nov 18 2016
Per comment #63, this is already merged to M55. Hence, removing "Merge-Approved-55" label.
,
Nov 20 2016
The following revision refers to this bug:
https://chromium.googlesource.com/chromium/src.git/+/8ce3d69f89935fab3a8fcf267d0b318fa871cd77

commit 8ce3d69f89935fab3a8fcf267d0b318fa871cd77
Author: kbr <kbr@chromium.org>
Date: Sun Nov 20 05:37:44 2016

Remove flaky expectation for Maps test.

BUG=626986
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel
TBR=zmo@chromium.org

Review-Url: https://codereview.chromium.org/2516113002
Cr-Commit-Position: refs/heads/master@{#433448}

[modify] https://crrev.com/8ce3d69f89935fab3a8fcf267d0b318fa871cd77/content/test/gpu/gpu_tests/maps_expectations.py
,
Jan 10 2017
Needed for Node.js v7.x (V8 5.4) – we can handle the backport on the Node.js side. Comparing the test in the CL with the comment #54, isn't the checked-in test missing the --turbo flag?
,
Jun 1 2018
Node 7.x is EOL. Removing the NodeJS-Backport-Approved label.
Comment 1 by nek...@chromium.org, Jul 13 2016. Status: WontFix (was: Untriaged)