
Issue 738186

Starred by 4 users

Issue metadata

Status: Fixed
Owner:
Closed: Sep 2017
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: ----
Pri: 1
Type: Bug

Blocked on:
issue 761691

Blocking:
issue 745862




Updated ninja needed

Project Member Reported by brucedaw...@chromium.org, Jun 29 2017

Issue description

The depot_tools copy of ninja.exe was last updated on November 9th (to 1.7.2). Since then there has been significant optimization work done on ninja and we are missing out on those optimizations. I did some build tests with the checked in version of ninja.exe and a tip-of-tree version. These are the gn args that I used:

is_component_build = false
is_debug = true
target_cpu = "x86"
enable_nacl = false
use_goma = true

I did these tests on a Z620 and these are the build times in seconds:
508.8
429.9
589.7
427.9
913.8
412.7
521.0
414.6
562.3
408.7

I alternated between the current ninja.exe and the latest version. The difference is easily visible, but more so if I separate them out and sort them:

Current:
508.8
521.0
562.3
589.7
913.8

Tip of tree:
408.7
412.7
414.6
427.9
429.9

If we compare the fastest build with the current ninja.exe to the slowest build with the latest version, we see an 18% speedup. If we discard the slowest run from each set (the 913.8 was clearly anomalous) and compare the averages of the remaining runs, we get a speedup of 31%.
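
For reference, the arithmetic behind those percentages, as a quick Python sketch (it assumes "discard" means dropping the slowest run from each set):

current = [508.8, 521.0, 562.3, 589.7, 913.8]
tip     = [408.7, 412.7, 414.6, 427.9, 429.9]
avg = lambda xs: sum(xs) / len(xs)
# Fastest current build vs. slowest tip-of-tree build:
print(min(current) / max(tip))                            # ~1.18, i.e. ~18%
# Drop the slowest run from each set (913.8 and 429.9), then compare averages:
print(avg(sorted(current)[:-1]) / avg(sorted(tip)[:-1]))  # ~1.31, i.e. ~31%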

The percentage speedups would be much lower on other build configurations (non-goma, higher symbol levels, etc.) but the improvements are dramatic enough that we should clearly upgrade.

I have not measured improvements on other platforms, but I believe that the optimizations are portable and should help everywhere.

 
Labels: -Pri-3 Pri-2
Status: (was: Assigned)

Comment 2 by thakis@chromium.org, Jun 29 2017

Yeah, I need to make a new release. I'm out next week, but I'll look at it after that. I've merged most stuff upstream I want to include already.
This is also needed to fix this failure:

Note: including file: e:\b\c\b\win\src\out\release\gen\third_party\webkit\source\bindings\core\v8\custom\../../../../../../../blink/core/HTMLElementTypeHelpers.cpp
Note: including file: e:\b\c\b\win\src\out\release\gen\third_party\webkit\source\bindings\core\v8\custom\../../../../../../../blink/core/CSSValueKeywords.cpp
Note: including file:  ../../third_party/WebKit/Source\core/css/HashTools.h
too many path components[5452/8019] CXX obj/third_party/WebKit/Source/core/testing/DummyModulator.obj

https://luci-logdog.appspot.com/v/?s=chromium%2Fbb%2Ftryserver.chromium.win%2Fwin_chromium_rel_ng%2F482153%2F%2B%2Frecipes%2Fsteps%2Fcompile__with_patch_%2F0%2Fstdout

The fix was landed in January (https://github.com/ninja-build/ninja/pull/1181) so a new version of ninja will resolve this issue as well as improve performance.
If we run into that issue, something else is amiss. We should not generate paths that deep. (32 after canonicalization should be enough for everyone etc)
It is apparently triggered by jumbo builds, which are still experimental. Daniel will try to come up with a workaround.
Cc: tikuta@chromium.org

Comment 7 by brat...@opera.com, Jul 6 2017

Could be this one:
e:\b\c\b\win\src\out\release\gen\third_party\webkit\source\bindings\core\v8\custom\../../../../../../../../../../third_party/WebKit/Source/bindings/core/v8/serialization/SerializedScriptValue.cpp

That's 34 components in that path. I will see if I can generate the files less deep in the "gen" structure, since every extra directory level there costs two path components: one to go down and one to come back up.

Comment 8 by thakis@chromium.org, Jul 11 2017

It's 32 after canonicalization iirc, so 

e:\b\c\b\win\src\out\release\gen\third_party\webkit\source\bindings\core\v8\custom\../../../../../../../../../../third_party/WebKit/Source/bindings/core/v8/serialization/SerializedScriptValue.cpp

would be

e:\b\c\b\win\src/third_party/WebKit/Source/bindings/core/v8/serialization/SerializedScriptValue.cpp

which has only 13 components.
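
To see how the counting works, here is a rough Python sketch of the canonicalization (illustrative only; ninja's actual CanonicalizePath is C++ and on Windows also rewrites \ to /):

import posixpath

raw = ("e:/b/c/b/win/src/out/release/gen/third_party/webkit/source"
       "/bindings/core/v8/custom/"
       + "../" * 10
       + "third_party/WebKit/Source/bindings/core/v8/serialization"
       "/SerializedScriptValue.cpp")
print(len(raw.split("/")))       # 34 components before canonicalization
canon = posixpath.normpath(raw)  # collapses each "dir/.." pair
print(canon.count("/"))          # 13 separators afterwards, matching the count above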

Comment 9 by brat...@opera.com, Jul 11 2017

I did a change that removed two elements from the path above and the error went away. 

To:
e:\b\c\b\win\src\out\release\gen\third_party\webkit\source\bindings\core\v8\../../../../../../../../../third_party/WebKit/Source/bindings/core/v8/serialization/SerializedScriptValue.cpp

I didn't see any other path that was long without also having a lot of ../../.. in it, so could ninja apply that canonicalization to non-canonicalized paths as well?
Hi, let me show performance stats from a Z840 running Linux.

I compared three build times each for the ninja in depot_tools and ninja at git HEAD, restarting the goma client and doing a clean build before each run.

My args.gn is
"""
is_debug = false
use_goma = true
is_component_build = true
enable_nacl = false
strip_absolute_paths_from_debug_symbols = true
use_lld = true
"""

The build command is
$ time ninja -d stats -C out/Release -j 1000 chrome

Three build times with the depot_tools ninja:
2m11.935s, 2m27.528s, 2m49.868s

Three build times with ninja at git HEAD:
1m26.433s, 1m10.645s, 1m12.085s


This difference comes from the slowness of posix_spawn (see the StartEdge stats).
https://github.com/ninja-build/ninja/commit/5153cedfd7720547489b6b2e5d5e4485def621c9
This change improved ninja on Linux a lot.
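
For context, the win comes from launching compiler processes without first duplicating ninja's large address space the way a plain fork() does. A minimal sketch of the API in question, via Python's binding of posix_spawn (a hypothetical demo, not ninja's actual C++ code):

import os

# posix_spawn forks and execs in one call; with vfork-style semantics the
# kernel skips copying the parent's (potentially huge) page tables, which
# is what dominated StartEdge time with plain fork()+exec().
pid = os.posix_spawn("/bin/true", ["/bin/true"], dict(os.environ))
os.waitpid(pid, 0)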

Stats from the depot_tools ninja:
metric                  count   avg (us)        total (ms)
.ninja parse            3061    672.9           2059.6
canonicalize str        814059  0.1             99.4
canonicalize path       6536862 0.1             523.7
lookup node             6635307 0.2             1391.8
.ninja_log load         1       33201.0         33.2
.ninja_deps load        1       163675.0        163.7
node stat               84633   1.5             129.3
depfile load            986     3.9             3.9
StartEdge               29364   3712.9          109024.2
FinishCommand           29364   429.6           12614.4

path->node hash load 0.67 (131801 entries / 196613 buckets)

real    2m11.935s
user    8m27.904s
sys     4m31.812s


Stats from the git HEAD ninja:
metric                  count   avg (us)        total (ms)
.ninja parse            3061    835.6           2557.6
canonicalize str        814059  0.1             118.7
canonicalize path       6536862 0.1             802.2
lookup node             6635307 0.3             1830.1
.ninja_log load         1       40969.0         41.0
.ninja_deps load        1       185270.0        185.3
node stat               84633   1.6             131.9
depfile load            986     4.0             3.9
StartEdge               29364   1000.2          29370.9
FinishCommand           29364   270.8           7951.8

path->node hash load 0.67 (131801 entries / 196613 buckets)

real    1m26.433s
user    11m5.964s
sys     1m30.228s


Is this a good time to update the depot_tools ninja?
Without a faster ninja, goma's improvements sometimes cannot translate into faster overall builds.

What is blocking this issue?
It just needs doing. I'll get to it, hopefully this month.
bratell: This suggests that it is 32 after normalization:

Nicos-MacBook-Pro:foo thakis$ cat build.ninja
rule cc
  deps = gcc
  depfile = $out.d
  command = echo "$out.d: bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../foo.txt" > $out.d; touch $out

build foo.o: cc foo.c
Nicos-MacBook-Pro:foo thakis$ ninja
[1/1] echo "foo.o.d: bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/......./bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../foo.txt" > foo.o.d; touch foo.o
Nicos-MacBook-Pro:foo thakis$ ninja -t deps
foo.o: #deps 1, deps mtime 1500412914 (VALID)
    foo.txt


With msvc deps:

Nicos-MacBook-Pro:foo thakis$ cat build.ninja
rule cc
  deps = msvc
  depfile = $out.d
  command = echo "Note: including file: bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../foo2.txt"; touch $out

build foo.o: cc foo.c
Nicos-MacBook-Pro:foo thakis$ ninja
[1/1] echo "Note: including file: bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../...r/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../bar/../foo2.txt"; touch foo.o
Nicos-MacBook-Pro:foo thakis$ ninja -t deps
foo.o: #deps 1, deps mtime 1500413304 (VALID)
    foo2.txt


And on a Windows box, CanonicalizePath() is supposed to convert \ to / too (but I don't have one at hand atm, so can't demo that right now).
ping?

Comment 15 by brat...@opera.com, Aug 4 2017

Blocking: 740177
Blocking: -740177
See above; we don't understand why jumbo builds run into too-many-path-components issues.
Ignoring the jumbo builds and the path separator issue, an updated version of ninja.exe will, in extreme cases, improve build speeds by 31% (see the original comment). In most cases the improvement will be smaller than 31%, but still worthwhile.

I've hacked up my local config so that I get to use the fast version already, but it's a bit ugly and it doesn't help others.

I absolutely think we should update ninja. But with jumbo builds, I'm afraid we'll find out that we're hitting the current limit of 32 path components due to some other bug where we don't canonicalize paths correctly, and if we just roll now, chances are we'll find that new bug after the roll and will have to roll again. So I'd like to first understand what exactly is going on there.
I'm not sure what your actual concern is.
What behavior are you expecting from ninja? Does the current ninja behave in a way you aren't expecting?

In my understanding, the problem occurs because of long paths in #include directives like the following when jumbo builds are enabled.
e.g.
#include "../../../../../../../../../third_party/WebKit/Source/core/layout/svg/LayoutSVGBlock.cpp" in out/Release/gen/third_party/WebKit/Source/core/layout/svg/svg_layout_jumbo_1.cc
The long path in the #include would be unnecessary if we added an include path when compiling.

And cl.exe reports file paths as `(absolute path of the including file's directory) + (path in the #include, as-is)`.
Ninja's clparser then passes a path having more than 32 components to CanonicalizePath in
https://github.com/ninja-build/ninja/blob/7bbc708ff08f5660f4cff4b3e8c675bec428a1f2/src/includes_normalize-win32.cc#L165
This causes ninja's internal error.

If I've misunderstood your concern, please let me know.
I'll investigate it.

See comment 13. At least on Linux, what counts is the post-normalization number of slashes. That's how things are supposed to work on Windows too, I think.
But this test shows that CanonicalizePath does not allow too many path components in the input path?
https://github.com/ninja-build/ninja/blob/7bbc708ff08f5660f4cff4b3e8c675bec428a1f2/src/util_test.cc#L295

Ping?

Nico, if I follow the document below, may I ask for your approval to release a new ninja (v1.7.3 or v1.8.0)?
https://github.com/ninja-build/ninja/blob/master/RELEASING
https://bugs.chromium.org/p/chromium/issues/detail?id=663749

In particular, the new ninja makes builds noticeably faster, so it would be great to get it. I'm running a fast version of ninja.exe from a branch in my local depot_tools, but that's a bit clumsy, and it would be great for everybody (and the build machines) to have faster builds.
Labels: -Pri-2 Pri-1
Owner: tikuta@chromium.org
Status: Started
Chatting with Nico, I understood the issue he pointed out.
For that issue, I made a PR:
https://github.com/ninja-build/ninja/pull/1314

If the PR is merged, I will release the new ninja instead of Nico, hopefully at the beginning of next week.

I got approval from Nico to release ninja.
Thanks, Nico.

I started the release process at this commit:
https://github.com/ninja-build/ninja/commit/327d6cee67dac252938cda82c91a6b4d65ef523b

From here on, I'm following Nico's instructions.

Stats including memory consumption for ninja 1.7.2 on Linux when building chrome:
tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ninja -C out/Release/ -j 1000                                                                                                                                       
ninja: Entering directory `out/Release/'
[49850/49850] STAMP obj/All.stamp
        Command being timed: "ninja -C out/Release/ -j 1000"
        User time (seconds): 814.86
        System time (seconds): 646.01
        Percent of CPU this job got: 489%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 4:58.39
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 934680
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 11681
        Minor (reclaiming a frame) page faults: 48981214
        Voluntary context switches: 2376464
        Involuntary context switches: 136319
        Swaps: 0
        File system inputs: 56
        File system outputs: 5613976
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Stats for commit 327d6cee67dac252938cda82c91a6b4d65ef523b:
tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ~/git/ninja/ninja -C out/Release/ -j 1000                                                                                                                           
ninja: Entering directory `out/Release/'
[49850/49850] STAMP obj/All.stamp
        Command being timed: "/usr/local/google/home/tikuta/git/ninja/ninja -C out/Release/ -j 1000"
        User time (seconds): 1143.57
        System time (seconds): 180.62
        Percent of CPU this job got: 1241%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:46.65
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 937828
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 10912
        Minor (reclaiming a frame) page faults: 43244255
        Voluntary context switches: 2278384
        Involuntary context switches: 678565
        Swaps: 0
        File system inputs: 472
        File system outputs: 5615184
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

args.gn is
"""
is_debug = false
use_goma = true
is_component_build = true
enable_nacl = false
strip_absolute_paths_from_debug_symbols = true
use_lld = true
"""

goma's backend cache was sufficiently warm for both builds, and build performance is significantly improved, while the increase in peak memory usage is small.

I'll run fuzzing as the next step.
https://github.com/ninja-build/ninja/blob/master/HACKING.md#using-afl-fuzz
Can you check this for an empty build too (don't touch anything, just run the command again after building, to measure time and memory use for ninja doing nothing but verifying that nothing needs to be done)? I usually do 3-5 empty builds (they only take a second or so) with each version and look at the one with the lowest time for each ninja.
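In other words, something like this Python sketch (a hypothetical helper, not what was actually run; the measurements below came from /usr/bin/time -v):

import subprocess, time

def best_of(n, cmd):
    # Run the no-op build n times and keep the minimum wall time;
    # taking the fastest run mitigates cold-disk-cache outliers.
    best = float("inf")
    for _ in range(n):
        start = time.monotonic()
        subprocess.run(cmd, check=True)
        best = min(best, time.monotonic() - start)
    return best

print(best_of(5, ["ninja", "-C", "out/Release", "-j", "1000"]))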
Slowest empty-build stats from master, out of 5 runs.
This run is slower, but I guess that is because the disk cache was not warm when I ran the command.

tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ~/git/ninja/ninja -C out/Release/ -j 1000
ninja: Entering directory `out/Release/'
ninja: no work to do.
        Command being timed: "/usr/local/google/home/tikuta/git/ninja/ninja -C out/Release/ -j 1000"
        User time (seconds): 2.20
        System time (seconds): 1.01
        Percent of CPU this job got: 94%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:03.41
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 430300
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 163345
        Voluntary context switches: 797
        Involuntary context switches: 15
        Swaps: 0
        File system inputs: 49464
        File system outputs: 90392
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0


The second-slowest stats on master are below.

tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ~/git/ninja/ninja -C out/Release/ -j 1000
ninja: Entering directory `out/Release/'
ninja: no work to do.
        Command being timed: "/usr/local/google/home/tikuta/git/ninja/ninja -C out/Release/ -j 1000"
        User time (seconds): 1.65
        System time (seconds): 0.49
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.15
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 350584
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 125559
        Voluntary context switches: 2
        Involuntary context switches: 12
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0


Slowest empty-build stats from ninja 1.7.2, out of 5 empty builds.

tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ninja -C out/Release/ -j 1000
ninja: Entering directory `out/Release/'
ninja: no work to do.
        Command being timed: "ninja -C out/Release/ -j 1000"
        User time (seconds): 1.69
        System time (seconds): 0.45
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.16
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 350584
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 125227
        Voluntary context switches: 13
        Involuntary context switches: 20
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Ignoring the cold-disk-cache case, the empty build is also essentially unchanged:
        time     peak memory (kbytes)
master: 0:02.15  350584
1.7.2:  0:02.16  350584
Oh, sorry, not slowest, fastest. With "lowest time" I meant "smallest time". As you say, the slowest number will be too slow due to caching effects. Taking the fastest time of several runs is an attempt to mitigate that.
Ah, I see.

The fastest of 5 runs on ninja master is below.

tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ~/git/ninja/ninja -C out/Release/ -j 1000
ninja: Entering directory `out/Release/'
ninja: no work to do.
        Command being timed: "/usr/local/google/home/tikuta/git/ninja/ninja -C out/Release/ -j 1000"
        User time (seconds): 1.53
        System time (seconds): 0.42
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:01.96
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 352736
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 92328
        Voluntary context switches: 1
        Involuntary context switches: 13
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0


The fastest of 5 runs on ninja 1.7.2 is below.

tikuta@tikuta:~/chromium/src$ /usr/bin/time -v ninja -C out/Release/ -j 1000
ninja: Entering directory `out/Release/'
ninja: no work to do.
        Command being timed: "ninja -C out/Release/ -j 1000"
        User time (seconds): 1.55
        System time (seconds): 0.42
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:01.98
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 350656
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 92573
        Voluntary context switches: 13
        Involuntary context switches: 19
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

Looks unchanged:
        time     peak memory (kbytes)
master: 0:01.96  352736
1.7.2:  0:01.98  350656
Project Member Comment 31 by bugdroid1@chromium.org, Sep 1 2017

The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/src.git/+/4adc990dde147503fe14793dd5d5f4ee07f20f2e

commit 4adc990dde147503fe14793dd5d5f4ee07f20f2e
Author: Takuto Ikuta <tikuta@google.com>
Date: Fri Sep 01 07:57:51 2017

Roll depot_tools e081cbe5a..b2e961b11

$ git log --oneline \
  e081cbe5aae6b131955b6c3bd7308d95d650948f..b2e961b1171d9f27278461a0a3286ab89161368c
b2e961b1 Roll recipe dependencies (trivial).
a3b67ae2 gclient: predefine host_os
3414ad2c Roll recipe dependencies (trivial).
9957a266 Update CQ proto to latest version
235b70db [git cl format] Actually format metrics XML files (fix an indentation error)
fe2f380d Update help text for git-map-branches
6761b9df git-map-branches: support no-parent when showing subjects
0925fa70 Upgrade led to make it work with the new luci-system-account Kitchen cook flag introduced in cl 622079.
07efc5a7 [roll-dep] Allow logs in ANGLE rolls
03293e8d fetch_end_to_end_test: remove unused, broken configs
82aeb5bd [bot_update] Print Python version.
01b91069 Support new urls in git-cl
77962551 Roll recipe dependencies (trivial).
0870df28 gclient flatten: only print DEPS info referenced by recursedeps
c1a82cb1 fetch_end_to_end_test: set env CHROME_HEADLESS=1
447f8eaa Roll recipe dependencies (trivial).
de00a2d2 win_toolchain: remove old 7z binary
76a9d04a gclient flatten: include info about DEPS files: repos and paths
adae2a62 gclient flatten: correctness fixes for OS-specific recursedeps
fafd4739 Roll recipe dependencies (trivial).
6f1a5145 Bump cipd client to 1.7.1
1e94e893 win_toolchain: avoid dialog from GCM post git 2.14 on Windows
c54fa812 Include "ios" as a known "deps_os" value.
c69b32e1 gclient flatten: parse DEPS file for deps_os recursedeps
a76b5658 Roll recipe dependencies (trivial).
5ec7713b gclient flatten: fixes for --pin-all-deps

Bug:  738186 
Change-Id: Iac06e21608bb45941ca86dff0ab15f8c42b3f498
Reviewed-on: https://chromium-review.googlesource.com/646926
Reviewed-by: Dirk Pranke <dpranke@chromium.org>
Commit-Queue: Takuto Ikuta <tikuta@google.com>
Cr-Commit-Position: refs/heads/master@{#499150}
[modify] https://crrev.com/4adc990dde147503fe14793dd5d5f4ee07f20f2e/DEPS

Blockedon: 761691
Project Member Comment 33 by bugdroid1@chromium.org, Sep 5 2017

The following revision refers to this bug:
  https://chromium.googlesource.com/chromium/tools/depot_tools/+/372e0fdbfaaf78df357e4d425bf7baa8613dbc9c

commit 372e0fdbfaaf78df357e4d425bf7baa8613dbc9c
Author: Takuto Ikuta <tikuta@google.com>
Date: Tue Sep 05 02:07:13 2017

Update ninja to v1.8.0.

New binaries for Mac, Linux32, Linux64, Windows.  Also update shell script.
See the bug for how these were built.
The main "new" thing is performance improvement when building chrome with goma.

After confirming the behavior of ninja 1.8.0 on the buildbot, I will roll ninja 1.8.1, which includes a fix for a non-deterministic test.

TBR=dpranke@chromium.org
Bug:  761691 ,  738186 
Change-Id: I108996dccfdf5d95d8815d6cece5eba46f3e8178
Reviewed-on: https://chromium-review.googlesource.com/648372
Commit-Queue: Takuto Ikuta <tikuta@google.com>
Reviewed-by: Nico Weber <thakis@chromium.org>

[modify] https://crrev.com/372e0fdbfaaf78df357e4d425bf7baa8613dbc9c/ninja
[modify] https://crrev.com/372e0fdbfaaf78df357e4d425bf7baa8613dbc9c/ninja-mac
[modify] https://crrev.com/372e0fdbfaaf78df357e4d425bf7baa8613dbc9c/ninja.exe
[modify] https://crrev.com/372e0fdbfaaf78df357e4d425bf7baa8613dbc9c/ninja-linux64
[modify] https://crrev.com/372e0fdbfaaf78df357e4d425bf7baa8613dbc9c/ninja-linux32

Status: Fixed (was: Started)
Unfortunately, the roll of ninja 1.8.0 did not change compile speed on the buildbots.

Here are some ninja stats from the goma canary builders.

Linux clobber builder
# 1.7.2
https://luci-milo.appspot.com/buildbot/chromium.fyi/Chromium%20Linux%20Goma%20Canary%20(clobber)/53424
metric           	count 	avg (us) 	total (ms)
.ninja parse     	3573  	1296.3  	4631.6
canonicalize str 	1601162	0.2     	258.6
canonicalize path	11767878	0.1     	1366.7
lookup node      	11767877	0.3     	3528.8
.ninja_log load  	1     	33.0    	0.0
.ninja_deps load 	1     	5.0     	0.0
node stat        	104608	0.8     	88.2
depfile load     	1605  	2.2     	3.6
StartEdge        	57080 	1644.0  	93841.7
FinishCommand    	57079 	796.9   	45486.6
path->node hash load 0.73 (143910 entries / 196613 buckets)

# 1.8.0
https://luci-milo.appspot.com/buildbot/chromium.fyi/Chromium%20Linux%20Goma%20Canary%20(clobber)/53425
metric           	count 	avg (us) 	total (ms)
.ninja parse     	3573  	1274.2  	4552.6
canonicalize str 	1601162	0.1     	238.0
canonicalize path	11767878	0.1     	1084.1
lookup node      	11767877	0.2     	2860.8
.ninja_log load  	1     	31.0    	0.0
.ninja_deps load 	1     	5.0     	0.0
node stat        	277125	1.5     	407.5
depfile load     	1605  	2.3     	3.7
StartEdge        	57080 	3164.4  	180621.3
FinishCommand    	57079 	206.5   	11788.6
path->node hash load 0.73 (143910 entries / 196613 buckets)

On Linux, I thought POSIX_SPAWN_USEVFORK would improve StartEdge time, but ninja v1.8.0 makes it worse on the buildbot.
I don't know why.
This is a plot of the 1-hour average compile-step time for the linux_chromium_rel_ng bot:
https://docs.google.com/spreadsheets/d/10QaUgs_QEF4rZtQB1VRXfmlqmTPtn1IK0QT68Rr4Lyc/edit#gid=749889813

Windows clobber builder
# 1.7.2
https://luci-milo.appspot.com/buildbot/chromium.fyi/CrWin7Goma%28clbr%29/37163
metric           	count 	avg (us) 	total (ms)
.ninja parse     	3835  	2191.4  	8403.9
canonicalize str 	974198	0.1     	54.2
canonicalize path	20802000	0.0     	571.2
lookup node      	18701600	0.1     	1530.0
.ninja_log load  	1     	133.0   	0.1
.ninja_deps load 	1     	22.0    	0.0
node stat        	115473	4.8     	559.9
depfile load     	1728  	19.0    	32.8
StartEdge        	65518 	1150.7  	75393.3
FinishCommand    	65515 	6401.4  	419385.3
path->node hash load 0.59 (155501 entries / 262144 buckets)

# 1.8.0
https://luci-milo.appspot.com/buildbot/chromium.fyi/CrWin7Goma%28clbr%29/37164
metric           	count 	avg (us) 	total (ms)
.ninja parse     	3835  	1269.9  	4869.9
canonicalize str 	974198	0.0     	3.0
canonicalize path	20802000	0.0     	251.4
lookup node      	18701600	0.1     	1081.2
.ninja_log load  	1     	91.0    	0.1
.ninja_deps load 	1     	22.0    	0.0
node stat        	317189	14.3    	4523.4
depfile load     	1728  	15.9    	27.6
StartEdge        	65518 	897.7   	58814.2
FinishCommand    	65515 	1604.6  	105123.1
CLParser::Parse  	42727 	1731.4  	73976.4
path->node hash load 0.59 (155501 entries / 262144 buckets)

FinishCommand time improved significantly, but that does not appear to affect the overall performance of the compile step.

This is a plot of the 1-hour average compile-step time for the win_chromium_rel_ng bot:
https://docs.google.com/spreadsheets/d/1ThP1uPJQ4LAuByVVPd9obeVoYihhW5LS56eumxN0dv8/edit#gid=733422157


So the improvement in ninja 1.8.0 is effective only on high-performance machines like the Z840.


Anyway, thank you Nico for letting me release the updated ninja 1.8.0; this will give faster builds to engineers using goma.
Blocking: 745862
