Investigate linker memory consumption on Windows
Issue description

Linker memory consumption on VC++ 2015 gn builds has gone up significantly over the last six months. VC++ 2017 linker memory usage is similar to VC++ 2015. clang linker memory usage is significantly lower, but clang builds are slower.

I was running some manual build tests and noticed that linker memory consumption seems to have increased a lot in the last six months, so I ran overnight tests using VC++ 2015, VC++ 2017, and clang. I previously did gyp versus gn tests on June 23, 2016, to compare build times and peak linker commit. At the time the maximum commit for links (from targets like unit_tests.exe, sync_performance_tests.exe, etc.) on 32-bit release non-component builds was about 10.5 to 11.1 GB on gn builds. Those same targets now consume about 15 GB of commit. That's enough to significantly harm build times on most machines, since we assume 5 GB per link when deciding how many links to run in parallel.

There were some gn build optimizations since June 23rd, so I would have expected a bit of a dip before Chrome growth gradually moved the numbers up. The increase seems surprisingly large. Both sets of numbers come from VC++ 2015; VC++ 2017 gives almost identical results, and clang uses less memory. The gyp numbers in the June 23 test were about 15% lower than the gn numbers, FWIW.

I measure commit rather than working set so that paging does not affect the results. I've verified that commit is a good proxy for physical RAM consumed when no paging is occurring.

I'm not sure what to do with this information. Switching to clang solves the memory problem but (currently) worsens build times. is_win_fastlink and component builds help a lot - I need to use is_win_fastlink more. Because linker memory consumption can force us to restrict linker parallelism, it may be important to understand what has happened. A good first step would be to run some tests over, say, the last half-dozen Chrome releases, including across the gyp/gn transition, to better see what the trend is.

A (Google visible only) spreadsheet showing some initial results is here: https://docs.google.com/spreadsheets/d/1XugeyGO2AWat8IfS0teiFFDRAdmPLwwXz-cxSm39DIA/edit#gid=1351099507
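The exact measurement tooling isn't shown in the report, but as a rough illustration of measuring commit (rather than working set) for linker processes during a build, a polling sketch along these lines could be used. This is only a sketch under some assumptions: Python with the psutil package on Windows, and the linker process names and poll interval are illustrative, not taken from the actual tooling used here.

```python
# Minimal sketch (assumption: psutil on Windows; not the script used for the
# numbers above). Polls running linker processes and records their peak
# commit (pagefile-backed private bytes), which is what this bug measures
# instead of working set.
import time
import psutil

LINKER_NAMES = {"link.exe", "lld-link.exe"}  # illustrative process names
peak_commit = {}  # pid -> highest commit seen, in bytes

while True:
    for proc in psutil.process_iter(["name", "memory_info"]):
        try:
            if (proc.info["name"] or "").lower() not in LINKER_NAMES:
                continue
            mem = proc.info["memory_info"]
            # On Windows psutil exposes the commit charge as 'pagefile';
            # 'peak_pagefile' is the OS-tracked maximum for the process.
            commit = getattr(mem, "peak_pagefile", mem.vms)
            peak_commit[proc.pid] = max(peak_commit.get(proc.pid, 0), commit)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    if peak_commit:
        print({pid: f"{c / 2**30:.1f} GB" for pid, c in peak_commit.items()})
    time.sleep(5)
```

Using the OS-tracked peak value means coarse polling still captures the true per-process maximum, as long as each linker process is sampled at least once before it exits.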
Jun 28 2018
I did some tests comparing link.exe and lld-link.exe's memory consumption. I looked at private commit size because that is unaffected by memory pressure; working set is a useless comparison method unless you only have one link running at a time, because it gets capped at roughly amount-of-RAM / number-of-links.

I found that lld used about 18% less commit than link.exe, peaking at 6.7 GB for unit_tests.exe. Most links used less than 5 GB of commit. lld often had 12-15 GB of working set, but that must have mostly been shared working set, presumably from the input files. This merely indicates that linking still benefits from a fast disk and lots of memory to cache as much input data as possible; it doesn't indicate page-file thrashing of any sort.

I can't repro the extremely high commit amounts reported initially. Those may have been working set numbers instead of commit numbers.

CCing zturner in case he's interested.
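As a rough way to separate private memory from the shared working set described above, one could compare each linker process's full working set against its unique set size (USS). This is again only a sketch, assuming psutil on Windows; memory_full_info() is comparatively slow because it has to walk the working set.

```python
# Rough check (assumption: psutil on Windows) of how much of a linker's
# working set is private versus shared. A working set far larger than the
# unique set size (USS) points at shared, file-backed pages (e.g. cached
# input files) rather than private commit.
import psutil

def working_set_breakdown(pid):
    p = psutil.Process(pid)
    full = p.memory_full_info()   # walks the working set; relatively slow
    wset = full.wset              # total working set, bytes
    uss = full.uss                # pages unique to this process
    shared = max(wset - uss, 0)   # approximation of the shared portion
    gib = 2**30
    print(f"working set {wset / gib:.1f} GB, "
          f"private (USS) {uss / gib:.1f} GB, "
          f"shared ~{shared / gib:.1f} GB")
```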
Jun 28 2018
Latest data (Google only) is here: https://docs.google.com/spreadsheets/d/1DPh6gA1kGQWYQHH7KndVD2W0bUFSv1DFmNbnCn7_dq8/edit?usp=sharing
Comment 1 by brucedaw...@chromium.org, Nov 22 2017