Issue metadata

Status: Fixed
Closed: Feb 28
EstimatedDays: ----
NextAction: ----
OS: Chrome
Pri: 1
Type: ----


BuildPackages failing to build chromeos-chrome

Project Member Reported by, Oct 31 2017

Issue description

Builders failed on: 
- asuka-release:
- auron_paine-release:
- auron_yuna-release:
- banjo-release:
- banon-release:
- betty-arc64-release:
- betty-release:
- bob-release:
- buddy-release:
- butterfly-release:
- candy-release:
- caroline-release:
- cave-release:
- celes-release:
- chell-release:
- clapper-release:
- coral-release:
- cyan-release:
- daisy-release:
- daisy_skate-release:
- daisy_spring-release:
- edgar-release:
- elm-release:
- enguarde-release:
- eve-release:
- expresso-release:
- falco-release:
- falco_li-release:
- fizz-release:
- gandof-release:
- glimmer-release:
- gnawty-release:
- gru-release:
- guado-release:
- hana-release:
- heli-release:
- jecht-release:
- kahlee-release:
- kefka-release:
- kevin-release:
- kip-release:
- lars-release:
- leon-release:
- link-release:
- lulu-release:
- lumpy-release:
- mccloud-release:
- monroe-release:
- nefario-release:
- newbie-release:
- ninja-release:
- novato-arc64-release:
- novato-release:
- nyan_big-release:
- nyan_blaze-release:
- nyan_kitty-release:
- oak-release:
- orco-release:
- panther-release:
- parrot-release:
- parrot_ivb-release:
- peach_pi-release:
- peach_pit-release:
- peppy-release:
- poppy-release:
- pyro-release:
- quawks-release:
- reef-release:
- reef-uni-release:
- reks-release:
- relm-release:
- rikku-release:

Both 62 and 63 release branches failed similarly, e.g.

For 62 specifically, the pre-flight passed building the same version of Chrome hours before the release builders failed on it.
At 7pm we see a pass, and at 10pm we see a failure partway through building Chrome.

Last night I was also not able to load up LogDog for these builds, maybe there was a wider infra outage that impacted our ability to use Goma?

Comment 2 Deleted

looks suspiciously like this issue.

A new version of goma was released at 10:20pm last night:

Comment 4 by, Oct 31 2017

A bunch of Chrome/Chromium PFQ informational builds have the same failures.

Comment 6 Deleted

Currently waiting for someone from goma to be available; I've contacted the mailing list with the information I have.

Hopefully we can figure out what's going on together, and if not, we can figure out how to roll back so we're able to make release builds.
+backup oncall

Comment 9 by, Oct 31 2017

For the sake of posterity, copying here what was said in a chat thread:

The goma release bug[1] mentions releasing new versions of the client and the server at the same time, so it's not clear to those of us on the outside whether it is safe to rollback the client without touching the server.

It's also not clear if rolling back the release CL[2] is appropriate / works, or if the right thing to do would be to release a new version which is identical to an older version.

For now, folks in the MTV timezone just don't have enough information or insight to fix this on our own without potentially making things worse, so we need to wait for Tokyo to come online to help.

[1] b/67872697
[2] cl/173992453
Components: Infra>Goma

Comment 11 by, Nov 1 2017

Maybe we should disable goma on branches in chromeos-chrome.ebuild? It still builds fine locally without it.

Comment 12 by, Nov 1 2017

Rolled back the goma client to version=141 to see if it fixes the issue.
master-chromium-pfq is green again.

Let's see if release builders will also follow.
Labels: -Pri-0 Pri-1
The build on the bots looks successful, so the priority is reduced to 1.

Thanks to hashimoto@, I could successfully reproduce this locally.
So it looks like a sandbox issue, since

% emerge-$BOARD chromeos-chrome

hits the same issue, but

% FEATURES="-usersandbox" emerge-$BOARD chromeos-chrome

does not. Also, I made sure:
- compiler_proxy.INFO does not have any logs about starting a Task.
- Running one of the failing commands manually (outside the sandbox) in the chroot works.

goma-team, any recent changes that could potentially affect the sandbox?

Does the sandbox emit any logs when the failure happens?

Re #15: I couldn't find any. vapier@, could you help?
sandbox tends to write to stderr when it fails.  if that's not available, then logs can get lost.  also, the Gentoo ebuild sandbox works in-process by overriding C library symbols, so it might cause weird problems for programs that rely on or assume certain behavior.

you can run `sandbox` to start an interactive shell and run commands inside of a new sandbox to test behavior.
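The in-process overriding described above can be illustrated with a toy sketch. This is an assumption-laden illustration, not the sandbox's actual source: the wrapper below lives in the main program to keep the demo to one file, whereas the real Gentoo sandbox ships it in a library injected with LD_PRELOAD.

```c
/* Toy sketch of libc symbol interposition: override open(2), do our
 * own work, then forward to the real libc implementation located via
 * dlsym(RTLD_NEXT, ...).  Not sandbox code; the log format is made up. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdio.h>

/* Our override of open(2): log the path, then call libc's open().
 * (The varargs mode argument used with O_CREAT is omitted for brevity.) */
int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
    fprintf(stderr, "sandbox: open(%s)\n", path);
    return real_open(path, flags);
}
```

Compiled as a shared object (`cc -shared -fPIC toy.c -o libtoy.so`, adding -ldl on glibc before 2.34) and run via `LD_PRELOAD=./libtoy.so some_command`, the same wrapper intercepts every open() the target makes, which is why programs that assume unmodified libc behavior can misbehave under the sandbox.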

Comment 18 by, Nov 2 2017

Between versions 141 and 142 there are no major code changes in gomacc.
The only deps update was boringssl, which gomacc doesn't depend on.

So I'm wondering whether lld might generate a binary that the sandbox can't handle well.
I'll try to investigate the detailed reason...
I changed it so lld is only used for Linux, but is gomacc also linked with lld for chromeos?
feel free to reassign to me if you're not the right owner. This isn't causing a problem on the tree anymore, so I feel like it shouldn't be sheriff-owned, but I can help if you aren't the right person.
Re #20, we use the Linux one for chromeos, specifically on the bots.
#20, #22

This is about in goma. You added os == "linux", but does it hold on chromeos, too?
As far as I know, the chromeos goma client has the same build configuration as Linux, but the base glibc version etc. are different.
Ah, I see. Then lld may be related to this. I thought that gomacc was built by the 'Goma ChromeOS GN' bot.
#24, that's right, but I believe the if-condition you wrote holds on the 'Goma ChromeOS GN' bot, too.
Sorry, this is wrong. Since Goma ChromeOS GN is not using clang, lld should not be used there X(

In conclusion, is lld related to this issue or not?
Which gomacc caused this issue, the one built on the ChromeOS builder or the Linux builder?
I confirmed gomacc linked with lld caused this issue.

$ cros_sdk --goma_dir=~/goma --chrome_root=~/work/chrome

--- with lld

(cr) ((85217e1...)) shinyak@shinyak ~/trunk/src/scripts $ ~/goma/ start
using /tmp/goma_shinyak as tmpdir
GOMA version dd02c2ad42f2c7f6d00176c9ffa3df6aabfe7e15@1509336204
waiting for compiler_proxy...
compiler proxy (pid=448) status: ok

Now goma is ready!

(cr) ((85217e1...)) shinyak@shinyak ~/trunk/src/scripts $ ~/goma/gomacc port

(cr) ((85217e1...)) shinyak@shinyak ~/trunk/src/scripts $ /usr/bin/sandbox
============================= Gentoo path sandbox ==============================
Detection of the support files.
Verification of the required files.
Setting up the required environment variables.
The protected environment has been started.
Process being started in forked instance.

shinyak@shinyak ~/trunk/src/scripts $ ~/goma/gomacc port
Segmentation fault (core dumped)

--- without lld

(cr) ((85217e1...)) shinyak@shinyak ~/trunk/src/scripts $ /usr/bin/sandbox
============================= Gentoo path sandbox ==============================
Detection of the support files.
Verification of the required files.
Setting up the required environment variables.
The protected environment has been started.
Process being started in forked instance.

shinyak@shinyak ~/trunk/src/scripts $ ~/goma/gomacc port


1. We will release the next version of goma without linking with lld.
2. I'll prioritize the chromeos goma canary to detect such failures earlier.


Project Member

Comment 31 by, Nov 2 2017

The following revision refers to this bug:

commit e7483c354c47b8299674ea770fce7de158b5fa14
Author: Shinya Kawanaka <>
Date: Thu Nov 02 07:01:46 2017

Comment 32 by, Nov 6 2017

Is this related to ?

Comment 33 by, Nov 6 2017

No, I don't think that is related. So far I've narrowed the problem down to this:

$ cat exit.s
.globl _start
_start:
movq $60, %rax
movq $0, %rdi
syscall

.globl exit
.globl __libc_start_main
$ make exit.o
as   -o exit.o exit.s
$ ld.lld exit.o -o ~/.subversion/exit   --hash-style=gnu  -dynamic-linker /lib64/ /lib/x86_64-linux-gnu/ 

In the sandbox:

$ .subversion/exit
Segmentation fault (core dumped)

I needed the --hash-style=gnu flag as well as the references to at least two DSO symbols to reproduce the problem, so I suspect that either lld is creating an invalid GNU-style hash table in some cases or the sandbox isn't reading it correctly.
hmm, the sandbox ELF parser does walk the symbol table to try and find the end, and unfortunately that uses a heuristic because the ELF format provides no way of getting that info up front.  lld might produce binaries that the heuristic doesn't handle.

i guess i'll have to bite the bullet and implement GNU hash table parsing, and if that fails, add a sanity check to make sure we don't walk outside the bounds of the registered memory region.

Comment 35 by, Nov 6 2017

The comment in says that it uses the sysv hash to get the number of symbols if the hash table exists.

So a quick "fix" is to pass "-hash-style=both" to lld so that it generates both sysv and gnu style hashes in an output file.
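The quick fix works because of how the SysV hash section is laid out; as a sketch (assuming only the standard .hash layout, not any sandbox code, and a hypothetical function name):

```c
#include <stdint.h>

/* The SysV .hash section begins with two 32-bit words: nbucket and
 * nchain, and nchain is by definition the number of entries in
 * .dynsym.  So when this section is present, no heuristic walk of
 * the symbol table is needed at all. */
static uint32_t sysv_symcount(const uint32_t *hash)
{
    return hash[1];  /* hash[0] = nbucket, hash[1] = nchain */
}
```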

Comment 36 by, Nov 6 2017

We ran into a similar problem in the CFI runtime library, and our solution at the time was to bounds check symbol name references.

Although from looking at the GNU hash table format it does seem possible to determine the number of symbols in the symbol table without needing to hash symbol names.
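That observation can be sketched as follows, assuming the standard DT_GNU_HASH layout: a four-word header {nbuckets, symoffset, bloom_size, bloom_shift}, the bloom filter words, one bucket word per bucket holding the first symbol index of its chain (0 if empty), then one chain word per hashed symbol whose low bit marks end-of-chain. The function name is hypothetical, not sandbox code.

```c
#include <stdint.h>

/* Derive the .dynsym entry count from a GNU hash table without
 * hashing any symbol names: chains are laid out contiguously in
 * bucket order, so the end of the chain that starts at the highest
 * bucket value is the last hashed symbol. */
static uint32_t gnu_hash_symcount(uint32_t nbuckets, uint32_t symoffset,
                                  const uint32_t *buckets,
                                  const uint32_t *chains /* indexed from symoffset */)
{
    uint32_t last = 0;
    for (uint32_t i = 0; i < nbuckets; i++)
        if (buckets[i] > last)
            last = buckets[i];
    if (last < symoffset)                        /* no hashed symbols */
        return symoffset;
    while ((chains[last - symoffset] & 1) == 0)  /* walk to end-of-chain bit */
        last++;
    return last + 1;
}
```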

Comment 37 by, Nov 6 2017

Fundamentally it feels more like a problem of the ELF spec itself. It is perhaps too late to fix it, but if ELF had `DT_SYMSZ` (or whatever it is called) field in the .dynamic section, we wouldn't have had to infer it from a hash table.
yeah, if there was an explicit signal like DT_SYMSZ ("Size in bytes of the symbol table"), everything would be fine :/

adding a check to sandbox akin to the CFI library probably wouldn't be too bad
The builders named in #0 don't seem to exist anymore.

Can we consider this issue obsolete?
ah, yeah, I think this is obsolete.
Status: Fixed (was: Available)
goma-client is no longer linked with lld
