Using curl to download a file with retries does not work reliably when the output is written to stdout.
The included test program creates a simple HTTP server that returns static content. If a Range header is specified, the requested range is returned. Otherwise, on every second request it adds a four-second delay in the middle of the transfer. Using curl and wget with an aggressive timeout resulted in the following behaviour:
# Works and resumes the file at the point where the previous transfer was aborted:
# wget --progress=dot:giga -S --retry-connrefused -O - http://localhost:8000 -c --read-timeout 2 >/tmp/f
# wget --progress=dot:giga -S --retry-connrefused -O - http://localhost:8000 --read-timeout 2 >/tmp/f
# Works but does not resume the file and ends up retransferring the first part of the file:
# curl --retry 2 --retry-delay 2 --max-time 2 http://localhost:8000 -o /dev/stdout > /tmp/f
# Does not work, reports success but results in double output of the first part of the file:
# curl --retry 2 --retry-delay 2 --max-time 2 http://localhost:8000 > /tmp/f
# Does not work, fails with a "failed to truncate, exiting" error and results in just the first part of the file:
# curl --retry 2 --retry-delay 2 --max-time 2 http://localhost:8000 -o /dev/stdout | dd
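Since the attached server.py was later deleted, the following is a rough sketch of what a test server with the behaviour described above might look like. The port, payload, range handling, and exact placement of the delay are assumptions reconstructed from the description, not the original program:

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# ~34 KB of recognisable static content.
PAYLOAD = b"".join(b"%06d abcdefghijklmnopqrstuvwxyz\n" % i for i in range(1000))

class Handler(BaseHTTPRequestHandler):
    plain_requests = 0  # counts requests without a Range header

    def do_GET(self):
        rng = self.headers.get("Range")
        if rng and rng.startswith("bytes="):
            # Resume: serve everything from the requested offset.
            start = int(rng[len("bytes="):].split("-")[0])
            body = PAYLOAD[start:]
            self.send_response(206)
            self.send_header("Content-Range", "bytes %d-%d/%d"
                             % (start, len(PAYLOAD) - 1, len(PAYLOAD)))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
            return
        Handler.plain_requests += 1
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        half = len(PAYLOAD) // 2
        self.wfile.write(PAYLOAD[:half])
        self.wfile.flush()
        if Handler.plain_requests % 2 == 1:
            time.sleep(4)  # stall mid-transfer so aggressive timeouts fire
        self.wfile.write(PAYLOAD[half:])

HTTPServer(("localhost", 8000), Handler).serve_forever()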
The quick provisioning changes (issue 779870) rely on downloading content from devservers and piping it through gzip before writing it directly to raw disk partitions. Because of these issues, curl's internal retries are unsuitable, and curl will need to be wrapped externally to retry after any failure.
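As an illustration of such an external wrapper, here is a minimal sketch (the URL, timeout, and retry limit are placeholders, not the actual provisioning values). It re-invokes curl with an explicit Range header starting at the number of bytes already written, so a retry resumes rather than restarts the transfer:

import subprocess
import sys

URL = "http://localhost:8000"  # placeholder; the real source is a devserver
MAX_ATTEMPTS = 5               # illustrative retry limit

offset = 0  # bytes already delivered downstream
for attempt in range(MAX_ATTEMPTS):
    proc = subprocess.Popen(
        ["curl", "-s", "--max-time", "2",
         "-H", "Range: bytes=%d-" % offset, URL],
        stdout=subprocess.PIPE)
    # Stream curl's output to our own stdout, counting bytes so the
    # next attempt can resume exactly where this one stopped.
    for chunk in iter(lambda: proc.stdout.read(65536), b""):
        sys.stdout.buffer.write(chunk)
        offset += len(chunk)
    if proc.wait() == 0:
        break  # curl reported success; transfer complete
else:
    sys.exit("transfer failed after %d attempts" % MAX_ATTEMPTS)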
Ideally, curl should:
- write the exact contents of the file to stdout regardless of whether -o /dev/stdout is specified
- work when the contents are being piped to another process
- resume the transfer at the point where it failed
(Excuse the hackiness of the test program...)
[Attachment: server.py, 2.7 KB (deleted)]