cros_sdk ssh: file descriptor passing not supported
Issue description

Observed in the following VMTest failure, where ssh multiplexing failed:
https://stainless.corp.google.com/browse/chromeos-image-archive/betty-arcnext-pre-cq/R72-11268.0.0-b3139604/vm_test_results_2/smoke/test_harness/results-03-cheets_CTS_P.9.0_r4.x86.CtsAdminTestCases/

11/16 15:04:47.612 INFO | tradefed_test:0090| Hostname: localhost:9228
11/16 15:04:47.632 DEBUG| ssh_host:0310| Running (ssh) 'cat "/etc/lsb-release"' from '_verify_hosts|<genexpr>|get_release_builder_path|_get_lsb_release_content|run|run_very_slowly'
11/16 15:04:47.668 ERROR| utils:0287| [stderr] mm_send_fd: file descriptor passing not supported
11/16 15:04:47.670 ERROR| utils:0287| [stderr] mux_client_request_session: send fds failed
11/16 15:04:47.673 WARNI| test:0606| The test failed with the following exception

Traceback (most recent call last):
  File "/build/betty-arcnext/usr/local/build/autotest/client/common_lib/test.py", line 567, in _exec
    _cherry_pick_call(self.initialize, *args, **dargs)
  File "/build/betty-arcnext/usr/local/build/autotest/client/common_lib/test.py", line 715, in _cherry_pick_call
    return func(*p_args, **p_dargs)
  File "/build/betty-arcnext/usr/local/build/autotest/server/cros/tradefed_test.py", line 91, in initialize
    self._verify_hosts()
  File "/build/betty-arcnext/usr/local/build/autotest/server/cros/tradefed_test.py", line 168, in _verify_hosts
    for host in self._hosts)
  File "/build/betty-arcnext/usr/local/build/autotest/server/cros/tradefed_test.py", line 168, in <genexpr>
    for host in self._hosts)
  File "/build/betty-arcnext/usr/local/build/autotest/server/hosts/cros_host.py", line 945, in get_release_builder_path
    lsb_release_content=self._get_lsb_release_content())
  File "/build/betty-arcnext/usr/local/build/autotest/server/hosts/cros_host.py", line 925, in _get_lsb_release_content
    'cat "%s"' % client_constants.LSB_RELEASE).stdout.strip()
  File "/build/betty-arcnext/usr/local/build/autotest/server/hosts/ssh_host.py", line 335, in run
    return self.run_very_slowly(*args, **kwargs)
  File "/build/betty-arcnext/usr/local/build/autotest/server/hosts/ssh_host.py", line 324, in run_very_slowly
    ssh_failure_retry_ok)
  File "/build/betty-arcnext/usr/local/build/autotest/server/hosts/ssh_host.py", line 268, in _run
    raise error.AutoservRunError("command execution error", result)
AutoservRunError: command execution error
* Command: /usr/bin/ssh -a -x -o ControlPath=/tmp/_autotmp_GTENXhssh-master/socket -o Protocol=2 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o BatchMode=yes -o ConnectTimeout=30 -o ServerAliveInterval=900 -o ServerAliveCountMax=3 -o ConnectionAttempts=4 -l root -p 9228 localhost "export LIBC_FATAL_STDERR_=1; if type \"logger\" > /dev/null 2>&1; then logger -tag \"autotest\" \"server[stack::get_release_builder_path|_get_lsb_release_content|run] -> ssh_run(cat \\\"/etc/lsb-release\\\")\";fi; cat \"/etc/lsb-release\""
Exit status: 255
Duration: 0.00887703895569
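For anyone trying to narrow this down by hand: the failure is in the multiplexed (ControlMaster) path that Autotest sets up, not in plain ssh connectivity. A rough way to compare the two from inside the chroot (the port and host are taken from the log above; the ControlPath is just an arbitrary scratch socket):

# plain connection, no multiplexing -- should still work even with a mis-built ssh
ssh -o ControlPath=none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 9228 root@localhost true

# multiplexed connection: the first call becomes the master, later calls go
# through the control socket and need fd passing, which is where
# "mm_send_fd: file descriptor passing not supported" would show up
ssh -o ControlMaster=auto -o ControlPersist=60 -o ControlPath=/tmp/bug-mux.sock -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 9228 root@localhost true
ssh -o ControlMaster=auto -o ControlPersist=60 -o ControlPath=/tmp/bug-mux.sock -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 9228 root@localhost true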
Nov 17
I found that a similar error in the past (issue 899490) was related to cros_sdk. Coincidentally, a change to cros_chrome_sdk was chumped in around the time the failures started (~10am): https://crrev.com/c/1324169
CC'ing the owner and the reviewers for potential insight.
Nov 17
The cros_chrome_sdk change has to do with version parsing, so it's unlikely to be the cause of this failure.
Nov 17
The last successful betty-pre-cq run I see is http://cros-goldeneye/chromeos/healthmonitoring/buildDetails?buildbucketId=8929683539620951712 (started 09:09). The first failed run is http://cros-goldeneye/chromeos/healthmonitoring/buildDetails?buildbucketId=8929680619321426976 (started 09:52).

I don't see anything obvious in this set: https://crosland.corp.google.com/log/11267.0.0..11268.0.0

There was a push-to-prod email at 08:48, although none of the changes there seem obviously related either. The CQ run at https://ci.chromium.org/p/chromeos/builders/luci.chromeos.general/Prod/b8929696151244362272 looks like it didn't start the CommitQueueHandleChanges stage until 10:15:22, so it's probably unrelated.

The TastVMTest stage (which also involves SSH-ing to the DUT, although not via Autotest's code) is still passing, so whatever the problem is, it's likely to be in Autotest/chromite/etc. and not on the DUT itself.
Nov 17
Whatever caused this, it looks like it also broke update_kernel.sh:

/update_kernel.sh --remote=X.X.X.X
Could not initiate first contact with remote host
Warning: Permanently added 'X.X.X.X' (ED25519) to the list of known hosts.
mm_send_fd: file descriptor passing not supported
mux_client_request_session: send fds failed
mm_send_fd: file descriptor passing not supported
mux_client_request_session: send fds failed
Nov 19
betty-pre-cq has been succeeding recently, so declaring this obsolete.
Nov 19
Does anyone have any idea why it was failing? If not, that's scary.
Nov 20
I now have two reports of this leaving external developers unable to upload kernels, and updating their chroot has not fixed their SDK. I verified that their ssh version matches mine (I am unaffected), which makes me think it's something in the scripts. Is there a chance we can dive deeper into the cause here?
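If it would help those developers confirm what they're hitting, a quick check (based on the analysis further down in this thread: a mis-built binary still contains the fallback error string, while a correctly configured build has no match at all) would be something like:

# inside the affected chroot
strings /usr/bin/ssh | grep 'file descriptor passing not supported'

# speculative recovery without recreating the chroot: rebuild openssh by
# itself so its configure tests run on a quiet system
sudo emerge --oneshot net-misc/openssh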
Nov 21
The error message "mm_send_fd: file descriptor passing not supported" only shows up when this condition fails:

#if defined(HAVE_SENDMSG) && (defined(HAVE_ACCRIGHTS_IN_MSGHDR) || defined(HAVE_CONTROL_IN_MSGHDR))

- HAVE_SENDMSG should be defined to 1. This is a simple link-time test against the C library, and glibc has provided sendmsg for a long time.
- HAVE_ACCRIGHTS_IN_MSGHDR should be undefined. glibc has never provided "accrights" in the msghdr struct.
- HAVE_CONTROL_IN_MSGHDR should be defined to 1. glibc has provided "control" in the msghdr struct since before 1997.

Both of those struct tests are compile-time-only tests, so they should be pretty reliable. Both work on my local system when I rebuild openssh.

When I grab the binpkgs from various builders, they look fine:

$ gsutil cat gs://chromeos-prebuilt/board/amd64-generic/full-R72-11279.0.0-rc1/packages/net-misc/openssh-7.5_p1-r1.tbz2 | tar jxvf - ./usr/bin/ssh
./usr/bin/ssh
$ strings -a usr/bin/ssh | grep 'file descriptor passing not supported'
<no hit>

Repeat for:
- gs://chromeos-prebuilt/board/betty/paladin-R72-11268.0.0-rc1/packages/net-misc/openssh-7.5_p1-r1.tbz2
- gs://chromeos-prebuilt/board/betty-arcnext/paladin-R72-11268.0.0-rc1/packages/net-misc/openssh-7.5_p1-r1.tbz2
- gs://chromeos-prebuilt/host/amd64/amd64-host/chroot-2018.11.18.222846/packages/net-misc/openssh-7.5_p1-r1.tbz2

So it would help to track down a binpkg where this is failing so we can poke its logs to see why those configure-time tests failed.
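For reference, the msghdr probes boil down to "does a tiny program that touches those struct members compile". A hand-rolled approximation of the msg_control check (not the literal configure test, just a sketch of what it verifies) looks like this:

cat > /tmp/check_msghdr.c <<'EOF'
#include <sys/types.h>
#include <sys/socket.h>
int main(void) {
    struct msghdr m;
    /* glibc provides msg_control/msg_controllen, so this compiles and
     * configure would define HAVE_CONTROL_IN_MSGHDR */
    m.msg_control = 0;
    m.msg_controllen = 0;
    return (int)m.msg_controllen;
}
EOF
cc -c -o /tmp/check_msghdr.o /tmp/check_msghdr.c && echo "msg_control present"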
Nov 26
Is there evidence that this is a test infra issue? From ^, it sounds like vapier suspects it is a product or build issue. -> vapier@ to find an owner.
Nov 27
Issue 907222 sounds like it may be tracking the same thing on lakitu-gpu-paladin.

VMTest is also failing consistently on betty-asan due to what looks like SSH connectivity issues:
http://cros-goldeneye/chromeos/healthmonitoring/buildDetails?buildbucketId=8928784670996094720
http://cros-goldeneye/chromeos/healthmonitoring/buildDetails?buildbucketId=8928875268857858048
etc.

INFO : QEMU binary: /b/swarming/wVWnpmt/ir/cache/cbuild/repository/chroot/usr/bin/qemu-system-x86_64
INFO : QEMU version: QEMU emulator version 3.0.0
INFO : Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
Persist requested. Use --ssh_port 9228 --ssh_private_key /b/swarming/wVWnpmt/ir/cache/cbuild/repository/src/build/images/betty/latest-cbuildbot/id_rsa --kvm_pid /tmp/kvm.9228 to re-connect to it.
INFO : QEMU binary: /b/swarming/wVWnpmt/ir/cache/cbuild/repository/chroot/usr/bin/qemu-system-x86_64
INFO : QEMU version: QEMU emulator version 3.0.0
INFO : Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
Using a pre-created KVM instance specified by /tmp/kvm.9228.
Could not initiate first contact with remote host
Connection timed out during banner exchange
Connection timed out during banner exchange
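The banner-exchange timeouts may be a different failure mode from the mux error above (no sshd response at all rather than a broken client), so it might be worth separating the two on an affected builder with something along these lines (port taken from the log above):

# is anything listening on the forwarded VM port at all?
nc -vz localhost 9228
# verbose ssh shows whether we ever receive the server's banner
ssh -v -o ControlPath=none -o ConnectTimeout=10 -p 9228 root@localhost true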
Nov 27
Also amd64-generic-asan, I think:
http://cros-goldeneye/chromeos/healthmonitoring/buildDetails?buildbucketId=8928773346330954032
http://cros-goldeneye/chromeos/healthmonitoring/buildDetails?buildbucketId=8928784671212004560
(Although this is speculation on my part; I don't see the mux error in those logs, so I'm not sure it's the same issue.)
Nov 27
Issue 907222 has been merged into this issue.
Nov 28
Assuming I'm following the bug chain correctly (from 907222, but I don't see the mux error in the logs), this has happened twice on betty-paladin today:
https://ci.chromium.org/p/chromeos/builds/b8928606754178880560
https://ci.chromium.org/p/chromeos/builds/b8928593002928035904
Both times on "swarm-cros-444".
Nov 29
Continuing the trend, betty-paladin has failed 8 times in a row, all on the same swarm-cros-444:
https://chrome-swarming.appspot.com/bot?id=swarm-cros-444&sort_stats=total%3Adesc
The last time betty-paladin passed, it was running on a different host, and this host was passing when it was building for veyron_minnie instead. There seems to be something about this specific combination of host and build.
Nov 29
Due to build affinity, a build will normally land on the same host unless that host becomes unavailable. Given the repeated failures, and no other clear signal, I'll go ahead and reimage the 444 device. -- Mike
Nov 29
This host has been rebuilt. It was done in time for build affinity to assign the same device back to betty-paladin, so this should rule out the instance itself as the issue. -- Mike
Dec 3
Note that in the past, this has been caused by SSH being broken in the ChromeOS SDK: https://bugs.chromium.org/p/chromium/issues/detail?id=865511#c47
From the last few comments, it looks like just the chroot on a builder might have been broken? Either way, this is between build and CI.
Dec 3
Issue 911230 has been merged into this issue.
Dec 3
This is causing betty-arc64-paladin failures.
Dec 4
Causing failures in betty-arcnext-paladin as well:
https://ci.chromium.org/p/chromeos/builders/luci.chromeos.general/CQ/b8928125837586220000
Dec 6
If I look at the swarm-cros-395 history (https://chrome-swarming.appspot.com/bot?id=swarm-cros-395), we see:

bb-8928207545579329984-chromeos-CQ 12/2/2018, 3:53:15 PM (Pacific Standard Time) 2h 7m 38s FAILURE
bb-8928215113192082528-chromeos-CQ 12/2/2018, 1:53:40 PM (Pacific Standard Time) 1h 31m 55s KILLED
bb-8928226812289562048-chromeos-CQ 12/2/2018, 10:47:27 AM (Pacific Standard Time) 1h 51m 47s SUCCESS

https://chrome-swarming.appspot.com/task?id=4189104c4f10c010
The SUCCESS was auron_paine-paladin. It didn't update ssh in the SDK.

https://chrome-swarming.appspot.com/task?id=4189ba8b49f00210
The KILLED was auron_paine-paladin again. It didn't make it to the VMTest phase and says "Task cancelled by user". It didn't reinstall the SDK (it used the previous run), and the UpdateSDK phase didn't rebuild openssh.

https://chrome-swarming.appspot.com/task?id=418a28a99b5cf810
The FAILURE was betty-arc64-paladin. But again, it didn't reinstall the SDK (it used the previous run), and the UpdateSDK phase didn't rebuild openssh. Its VMTest phase died because of "mm_send_fd: file descriptor passing not supported". Looking at its build logs, it fetched board prebuilts from these four sources:
gs://chromeos-prebuilt/board/amd64-generic/full-R73-11331.0.0-rc1/packages/Packages
gs://chromeos-prebuilt/board/amd64-generic/paladin-R73-11329.0.0-rc3/packages/Packages
gs://chromeos-prebuilt/board/betty-arc64/paladin-R73-11329.0.0-rc3/packages/Packages
gs://chromeos-prebuilt/board/betty-arcnext/chrome-R73-11327.0.0-rc1/packages/Packages
But looking at the binpkgs from them, the ssh binary is built correctly.

Moving forward in the logs, that bot kept failing until the builder switched away to a different config (nyan_kitty-full-compile-paladin):
https://chrome-swarming.appspot.com/task?id=41924fb8552a9e10

So if ssh was bad, maybe it was poisoned even earlier, but it didn't cause failures because we didn't actually run ssh until we hit a bot that ran VMTests (e.g. betty). Which might be why non-betty bots continue passing, but betty's keep falling over -- it's the only config anymore that runs VMTests.

If we look at the history of swarm-cros-444, it's the same: non-betty configs cycle through fine, but when a betty config swaps onto the bot, VMTests fall down. Then betty switches away, and the bot starts passing again.
https://chrome-swarming.appspot.com/bot?id=swarm-cros-444

If we focus on betty-paladin, we see:
FAIL: https://cros-goldeneye.corp.google.com/chromeos/healthmonitoring/buildDetails?buildbucketId=8928519910630912256
PASS: https://cros-goldeneye.corp.google.com/chromeos/healthmonitoring/buildDetails?buildbucketId=8928506222428076704
The pass happened because the config detected a problem with the buildroot, so it wiped it and started from scratch. It started failing as soon as it switched to swarm-cros-444, but was able to recover by nuking things.

If we keep going back in the history of swarm-cros-444 to see when the SDK was last created, we have to go pretty far back (so I guess that means our system is working, given that the majority of the time it's a hot cache?).
https://chrome-swarming.appspot.com/task?id=413969aad5accf10
That task created a new SDK using version 2018.11.15.164630. If we fetch that version, we see it has a broken ssh:

$ gsutil cp gs://chromiumos-sdk/cros-sdk-2018.11.15.164630.tar.xz ./
$ tar -Ipixz -xvf cros-sdk-2018.11.15.164630.tar.xz ./usr/bin/ssh
$ strings ./usr/bin/ssh | grep file.*pass
%s: file descriptor passing not supported

OK, so we've got bots poisoned from weeks previous, but only certain configs (betty) tickle the bug. Let's look at the SDK. The build before (cros-sdk-2018.11.14.214548) and after (cros-sdk-2018.11.16.103137) aren't broken.
2018.11.14.214548: https://cros-goldeneye.corp.google.com/chromeos/healthmonitoring/buildDetails?buildbucketId=8929816784469843488
2018.11.15.164630: https://cros-goldeneye.corp.google.com/chromeos/healthmonitoring/buildDetails?buildbucketId=8929745354078815440
2018.11.16.103137: https://cros-goldeneye.corp.google.com/chromeos/healthmonitoring/buildDetails?buildbucketId=8929678841154440608

For the broken build, SetupBoard has this series of steps logged:
https://luci-logdog.appspot.com/logs/chromeos/buildbucket/cr-buildbucket.appspot.com/8929745354078815440/+/steps/SetupBoard__amd64-host_/0/stdout
Still building llvm-8.0_pre339409_p20180926-r5 (28m33.8s). Logs in /tmp/llvm-8.0_pre339409_p20180926-r5-pZMABG
Started net-misc/openssh-7.5_p1-r1 (logged in /tmp/openssh-7.5_p1-r1-cAP1By)
Pending 40/720, Building 4/4, [Time 20:18:28 | Elapsed 73m7.3s | Load 47.04 53.6 49.7]
Completed sys-devel/llvm-8.0_pre339409_p20180926-r5 (in 30m27.9s)
Still building ghc-8.0.2 (15m48.6s). Logs in /tmp/ghc-8.0.2-B7GCUi
Started sys-libs/libcxxabi-7.0.0-r4 (logged in /tmp/libcxxabi-7.0.0-r4-6dUpQU)
Started sys-devel/lld-8.0_pre339409-r1 (logged in /tmp/lld-8.0_pre339409-r1-0sLEv2)
Started sys-devel/clang-8.0_pre339409_p20180926-r1 (logged in /tmp/clang-8.0_pre339409_p20180926-r1-mUeEBL)
Started sys-devel/autofdo-0.18-r3 (logged in /tmp/autofdo-0.18-r3-ydhqrA)
Completed sys-devel/clang-8.0_pre339409_p20180926-r1 (in 0m12.4s)
Still building coreboot-sdk-0.0.1-r70 (53m7.4s). Logs in /tmp/coreboot-sdk-0.0.1-r70-cNPXrl
Still building openssh-7.5_p1-r1 (2m6.4s). Logs in /tmp/openssh-7.5_p1-r1-cAP1By
Completed sys-devel/autofdo-0.18-r3 (in 0m49.8s)
Pending 36/720, Building 5/5, [Time 20:20:12 | Elapsed 74m51.1s | Load 32.04 46.31 47.47]
Completed sys-devel/lld-8.0_pre339409-r1 (in 1m3.3s)
Pending 36/720, Building 4/4, [Time 20:20:26 | Elapsed 75m4.6s | Load 29.98 45.18 47.08]
Completed net-misc/openssh-7.5_p1-r1 (in 2m57.8s)

For the working builds, SetupBoard has this:
https://cros-goldeneye.corp.google.com/chromeos/healthmonitoring/buildDetails?buildbucketId=8929816784469843488
Started net-misc/openssh-7.5_p1-r1 (logged in /tmp/openssh-7.5_p1-r1-Wm0LoV)
Still building ghc-8.0.2 (15m15.7s). Logs in /tmp/ghc-8.0.2-MIzLXd
Still building coreboot-sdk-0.0.1-r70 (55m17.6s). Logs in /tmp/coreboot-sdk-0.0.1-r70-YkmjRg
Still building openssh-7.5_p1-r1 (2m0.1s). Logs in /tmp/openssh-7.5_p1-r1-Wm0LoV
Still building llvm-8.0_pre339409_p20180926-r4 (31m49.2s). Logs in /tmp/llvm-8.0_pre339409_p20180926-r4-b66KgA
Still building ghc-8.0.2 (17m15.8s). Logs in /tmp/ghc-8.0.2-MIzLXd
Still building coreboot-sdk-0.0.1-r70 (57m17.7s). Logs in /tmp/coreboot-sdk-0.0.1-r70-YkmjRg
Still building openssh-7.5_p1-r1 (4m0.2s). Logs in /tmp/openssh-7.5_p1-r1-Wm0LoV
Still building llvm-8.0_pre339409_p20180926-r4 (33m49.3s). Logs in /tmp/llvm-8.0_pre339409_p20180926-r4-b66KgA
Completed net-misc/openssh-7.5_p1-r1 (in 4m22.4s)

https://luci-logdog.appspot.com/logs/chromeos/buildbucket/cr-buildbucket.appspot.com/8929678841154440608/+/steps/SetupBoard__amd64-host_/0/stdout
Started net-misc/openssh-7.5_p1-r1 (logged in /tmp/openssh-7.5_p1-r1-ilAI9g)
Still building coreboot-sdk-0.0.1-r70 (58m29.4s). Logs in /tmp/coreboot-sdk-0.0.1-r70-_kEwT8
Still building llvm-8.0_pre339409_p20180926-r5 (33m3.2s). Logs in /tmp/llvm-8.0_pre339409_p20180926-r5-qeDgjj
Still building ghc-8.0.2 (17m8.9s). Logs in /tmp/ghc-8.0.2-WRGOhG
Still building openssh-7.5_p1-r1 (2m0.1s). Logs in /tmp/openssh-7.5_p1-r1-ilAI9g
Still building llvm-8.0_pre339409_p20180926-r5 (35m3.3s). Logs in /tmp/llvm-8.0_pre339409_p20180926-r5-qeDgjj
Still building ghc-8.0.2 (19m9.0s). Logs in /tmp/ghc-8.0.2-WRGOhG
Still building openssh-7.5_p1-r1 (4m0.2s). Logs in /tmp/openssh-7.5_p1-r1-ilAI9g
Still building coreboot-sdk-0.0.1-r70 (61m29.5s). Logs in /tmp/coreboot-sdk-0.0.1-r70-_kEwT8
Completed net-misc/openssh-7.5_p1-r1 (in 4m32.8s)

setup_board for the "amd64-host" board (which is a fake board that means "bootstrap a new SDK") basically does:
- emerge from source into / all the toolchain packages as well as virtual/target-sdk
- create a new empty root under /build/amd64-host
- emerge from (fresh/local) binary packages all the toolchain packages and virtual/target-sdk into /build/amd64-host
- tar up /build/amd64-host, and that's your new SDK

So my guess is that openssh is running configure at the same time llvm/clang are updating, that the llvm/clang update is not atomic, and that the non-atomicity triggers at just the *wrong* time, which breaks a few of the openssh configure tests and leads to this misbehavior. We've seen similar things with gcc & binutils updates (issue 715788).

So it's probably a bug in Gentoo in how it updates packages, but I'm not confident we can get them to spend too many cycles on this. It would probably be much easier if we disconnected the SDK build steps and managed the rebuild/install of the toolchain packages by hand, as sketched below:
- install sys-kernel/linux-headers & sys-libs/glibc
- install sys-devel/binutils
- install sys-libs/libcxx & sys-libs/libcxxabi & dev-libs/elfutils & dev-lang/go & sys-devel/lld & sys-devel/gcc (only because I think we switched the SDK to build using clang by this point, and we're not using lld to link in the SDK yet)
- install sys-devel/llvm & sys-devel/clang
- install system, virtual/target-sdk, world, virtual/target-sdk-nobdeps
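To make that ordering concrete, a rough sketch of the hand-managed sequence described above (package atoms taken from the list; the emerge invocations are illustrative, not the actual build_sdk_board code):

# build the toolchain strictly before anything that depends on it,
# one group at a time, so nothing reruns configure mid-upgrade
emerge --oneshot sys-kernel/linux-headers sys-libs/glibc
emerge --oneshot sys-devel/binutils
emerge --oneshot sys-libs/libcxx sys-libs/libcxxabi dev-libs/elfutils dev-lang/go sys-devel/lld sys-devel/gcc
emerge --oneshot sys-devel/llvm sys-devel/clang
# only then build the rest of the SDK, where heavy parallelism is safe again
emerge --update --deep @system virtual/target-sdk virtual/target-sdk-nobdeps

The build_sdk_board change referenced later in this thread takes the same basic approach of isolating the toolchain packages from the rest of the build.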
Dec 14
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/chromite/+/86cdce6fb00ded79d666c191a21c6520be29f449

commit 86cdce6fb00ded79d666c191a21c6520be29f449
Author: Mike Frysinger <vapier@chromium.org>
Date: Fri Dec 14 03:28:26 2018

cbuildbot: SDKUprevStage: log the new SDK version as a step

This makes it easier to review SDK bot logs to see what SDK version it ultimately produced.

BUG=chromium:906289
TEST=precq passes

Change-Id: Ie1b8fdb4f09d645d6e99134d31a8408da238bcd7
Reviewed-on: https://chromium-review.googlesource.com/1365310
Commit-Ready: ChromeOS CL Exonerator Bot <chromiumos-cl-exonerator@appspot.gserviceaccount.com>
Tested-by: Mike Frysinger <vapier@chromium.org>
Reviewed-by: Alex Klein <saklein@chromium.org>

[modify] https://crrev.com/86cdce6fb00ded79d666c191a21c6520be29f449/cbuildbot/stages/sdk_stages.py
Dec 20
https://chromium-review.googlesource.com/1387717 might take care of this
Dec 21
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/crosutils/+/d919057cc9330fbae449e53ac0ff622834b35728

commit d919057cc9330fbae449e53ac0ff622834b35728
Author: Mike Frysinger <vapier@chromium.org>
Date: Fri Dec 21 21:23:35 2018

build_sdk_board: build the toolchain packages by themselves

By building the toolchain in parallel with other packages which rely on the toolchain, we might introduce random breakage when a file is updated in place and temporarily causes it to fail to run (e.g. if we update the assembler while it's running, it might fail). We can mitigate this by building just the toolchain packages by themselves followed by everything else.

BUG=chromium:906289
TEST=sdk bot passes

Change-Id: I1c8fcf74f232b66de52def84d0ce2f05b0bc16b0
Reviewed-on: https://chromium-review.googlesource.com/c/1387717
Reviewed-by: Jason Clinton <jclinton@chromium.org>
Tested-by: Mike Frysinger <vapier@chromium.org>

[modify] https://crrev.com/d919057cc9330fbae449e53ac0ff622834b35728/build_sdk_board
Dec 23
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/crosutils/+/978efbfc438a7abd090a3b4d9ed3f3360578238b

commit 978efbfc438a7abd090a3b4d9ed3f3360578238b
Author: Mike Frysinger <vapier@chromium.org>
Date: Sun Dec 23 22:46:14 2018

make_chroot: support pixz for decompression

For a speed up on decompression, use pixz when available. We need to tweak the decompression command slightly to workaround misbehavior -- it will try to decompress the file in place instead of stdout.

BUG=chromium:906289
TEST=creating new chroot still works

Change-Id: I25b1a867f4663dd6faea97ab849dc19906eecf12
Reviewed-on: https://chromium-review.googlesource.com/1387714
Commit-Ready: Mike Frysinger <vapier@chromium.org>
Tested-by: Mike Frysinger <vapier@chromium.org>
Reviewed-by: LaMont Jones <lamontjones@chromium.org>

[modify] https://crrev.com/978efbfc438a7abd090a3b4d9ed3f3360578238b/sdk_lib/make_chroot.sh
Dec 23
The following revision refers to this bug:
https://chromium.googlesource.com/chromiumos/platform/crosutils/+/7d88be93f1d4005cbdc3ccdf8574942d74bf31a3

commit 7d88be93f1d4005cbdc3ccdf8574942d74bf31a3
Author: Mike Frysinger <vapier@chromium.org>
Date: Sun Dec 23 22:46:11 2018

make_chroot: improve early logging a bit

Add some basic command logging to every command we run inside the chroot. This helps debug early bootstrap failures without adding too much noise to the process.

BUG=chromium:906289
TEST=sdk bootstrap still works

Change-Id: Ibffeca33c2a42806255335ce89f778225a99db9e
Reviewed-on: https://chromium-review.googlesource.com/1387716
Commit-Ready: Mike Frysinger <vapier@chromium.org>
Tested-by: Mike Frysinger <vapier@chromium.org>
Reviewed-by: Alex Klein <saklein@chromium.org>

[modify] https://crrev.com/7d88be93f1d4005cbdc3ccdf8574942d74bf31a3/sdk_lib/make_chroot.sh