Push to Prod failed at repo sync with "insufficient permission"
Issue description
Comment 1 by nxia@chromium.org, Oct 9 2017
Example on chromeos-test@chromeos-server120:
chromeos-test@chromeos-server120:/usr/local/.repo/projects/autotest.git/objects$ /usr/local/autotest/site_utils/deploy_server_local.py
Checking tree status:
Tree status: clean
Updating Repo.
remote: Finding sources: 100% (97/97)
remote: Total 97 (delta 54), reused 96 (delta 54)
error: insufficient permission for adding an object to repository database /usr/local/.repo/projects/autotest.git/objects
fatal: failed to write object
fatal: unpack-objects failed
remote: Finding sources: 100% (97/97)
remote: Total 97 (delta 54), reused 96 (delta 54)
error: insufficient permission for adding an object to repository database /usr/local/.repo/projects/autotest.git/objects
fatal: failed to write object
fatal: unpack-objects failed
error: Cannot fetch chromiumos/third_party/autotest
error: Exited sync due to fetch errors
Traceback (most recent call last):
File "/usr/local/autotest/site_utils/deploy_server_local.py", line 519, in <module>
sys.exit(main(sys.argv[1:]))
File "/usr/local/autotest/site_utils/deploy_server_local.py", line 500, in main
repo_sync(behaviors.update_push_servers)
File "/usr/local/autotest/site_utils/deploy_server_local.py", line 167, in repo_sync
subprocess.check_output(['repo', 'sync'])
File "/usr/lib/python2.7/subprocess.py", line 573, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['repo', 'sync']' returned non-zero exit status 1
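
A minimal sketch for confirming this failure mode (assuming, per the comments below, that chromeos-test is the expected owner of the checkout; the path is taken from the error above):

# Any entries listed here are not owned by chromeos-test and will make
# git's unpack-objects fail with "insufficient permission".
find /usr/local/.repo/projects/autotest.git/objects ! -user chromeos-test -ls | head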
Comment 2, Oct 9 2017
Is this on all servers, or a limited subset?
Comment 3, Oct 9 2017
I'm wondering if I screwed up and ran sync somewhere as root while trying to fix things last week.
Comment 4, Oct 9 2017
I run "sudo chown -R chromeos-test:eng *" and changed the file owner to chromeos-test, then deploy_server_local.py passed. There're at least 5 other shards with the same problem chromeos-server104.mtv.corp.google.com chromeos-server33.cbf.corp.google.com chromeos-server37.cbf.corp.google.com chromeos-server118.mtv.corp.google.com chromeos-server120.mtv.corp.google.com
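
A sketch of applying the same fix across those shards (assumptions: ssh access as chromeos-test, sudo rights on each host, and the same object-database path everywhere; targeting the objects path directly is intended to match the chown above):

for host in chromeos-server104.mtv chromeos-server33.cbf chromeos-server37.cbf \
            chromeos-server118.mtv chromeos-server120.mtv; do
  # Re-own the whole object database on each shard.
  ssh "${host}.corp.google.com" \
      'sudo chown -R chromeos-test:eng /usr/local/.repo/projects/autotest.git/objects'
done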
Comment 5, Oct 9 2017
I touched about 9-10 servers by hand, for a variety of reasons. But since each one was different, it seems weird that I would have made the same mistake each time.
Comment 6, Oct 9 2017
I suggest fixing these manually, then watching to see if it happens again. If it doesn't, blame me and mark it "WontFix".
Comment 7, Oct 9 2017
On some shards, not all. It looks like they were modified on Sep 3, which is weird since we've had a lot of push-to-prods after that.
chromeos-test@chromeos-server37:/usr/local/.repo/projects/autotest.git/objects$ stat 87
  File: ‘87’
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: ca01h/51713d    Inode: 1048913     Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2017-10-06 10:56:25.891346037 -0700
Modify: 2017-09-03 10:28:26.238355912 -0700
Change: 2017-09-03 10:28:26.238355912 -0700
 Birth: -
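
A sketch for surveying all object directories at once instead of stat-ing them one by one (run from the objects directory; %U and %y are GNU stat's owner and modify-time fields):

# Owner, mtime, and name per entry; root-owned entries from Sep 3 are the stale ones.
stat -c '%U %y %n' * | sort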
Comment 8, Oct 11 2017
The objects owned by root were most likely created by a manual sync run as root (see #3).
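
If that theory holds, a hypothetical guard (not something deploy_server_local.py currently does) could refuse to sync under the wrong user:

# Hypothetical pre-sync check; chromeos-test is the assumed service account.
if [ "$(id -un)" != "chromeos-test" ]; then
  echo "refusing to 'repo sync' as $(id -un); run as chromeos-test" >&2
  exit 1
fi
repo sync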
Comment 9, Oct 11 2017
Okay, sorry to have left that bomb behind for you to find.