Chrome heap overflow in CertificateResourceHandler
Project Member Reported by markbrand@google.com, May 4 2015
There’s an integer overflow/truncation issue in the handling of large (>4gig) files with certificate MIME-types (application/x-x509-user-cert and similar), resulting in heap corruption/RCE in the Chrome browser process.
The vulnerable code is in content::CertificateResourceHandler, which handles requests for these MIME-types.
The CertificateResourceHandler tracks the size of the file in an instance variable, content_length_, of type size_t; the file is retrieved in chunks of size <= 32768 bytes, and the chunks are stored in a vector for later reassembly:
bool CertificateResourceHandler::OnReadCompleted(int bytes_read, bool* defer) {
  if (!bytes_read)
    return true;

  // We have more data to read.
  DCHECK(read_buffer_.get());
  content_length_ += bytes_read;  // <---- bytes_read <= 32768

  // Release the ownership of the buffer, and store a reference
  // to it. A new one will be allocated in OnWillRead().
  scoped_refptr<net::IOBuffer> buffer;
  read_buffer_.swap(buffer);
  // TODO(gauravsh): Should this be handled by a separate thread?
  buffer_.push_back(std::make_pair(buffer, bytes_read));

  return true;
}
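For scale, a minimal standalone sketch (mine, not Chromium code): with reads capped at 32768 bytes, 131072 completed reads already total 2^32 bytes, and on the affected 64-bit builds the size_t accumulator holds that total without wrapping; the narrowing only happens later, at reassembly.

// Hypothetical sketch of the accumulation above (not Chromium code):
// content_length_ grows past UINT32_MAX without itself overflowing,
// because size_t is 64 bits wide on x64 builds.
#include <cstdint>
#include <cstdio>

int main() {
  const size_t kMaxRead = 32768;  // upper bound on bytes_read per call
  size_t content_length = 0;      // stands in for content_length_
  size_t reads = 0;
  while (content_length <= UINT32_MAX) {
    content_length += kMaxRead;   // mirrors content_length_ += bytes_read
    ++reads;
  }
  // Prints: 131072 reads, 4294967296 bytes -- one past UINT32_MAX.
  printf("%zu reads, %zu bytes\n", reads, content_length);
  return 0;
}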
Once all of the response data has been retrieved, the handler attempts to reassemble these chunks into a freshly allocated IOBuffer of sufficient size. Unfortunately, IOBuffer takes an int for its size parameter, so the size_t content_length_ is truncated for sufficiently large files. When the response data is then reassembled into the resulting buffer, the copy runs past the end of the newly allocated IOBuffer, corrupting the heap.
void CertificateResourceHandler::AssembleResource() {
  // 0-length IOBuffers are not allowed.
  if (content_length_ == 0) {
    resource_buffer_ = NULL;
    return;
  }

  // Create the new buffer.
  resource_buffer_ = new net::IOBuffer(content_length_);  // <---- truncation to 'int' occurs here

  // Copy the data into it.
  size_t bytes_copied = 0;
  for (size_t i = 0; i < buffer_.size(); ++i) {
    net::IOBuffer* data = buffer_[i].first.get();
    size_t data_len = buffer_[i].second;
    DCHECK(data != NULL);
    DCHECK_LE(bytes_copied + data_len, content_length_);
    memcpy(resource_buffer_->data() + bytes_copied, data->data(), data_len);  // <---- we will SEGV here
    bytes_copied += data_len;
  }

  DCHECK_EQ(content_length_, bytes_copied);
}
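To make the narrowing concrete, here is a hedged standalone sketch (mine, not the Chromium code): passing a size_t total just over 4 GiB through an int-sized parameter keeps only the low bits, so the allocation ends up tiny while the copy loop still walks the full length.

// Hypothetical sketch of the size_t -> int narrowing performed by
// new net::IOBuffer(content_length_); values and names are illustrative.
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
  size_t content_length = (1ULL << 32) + 4096;        // a response just over 4 GiB
  int alloc_size = static_cast<int>(content_length);  // narrows to the low 32 bits
  // On the affected builds this yields 4096: the buffer is allocated with
  // that size, yet bytes_copied in the loop above runs all the way to
  // content_length, so the memcpy walks far past the end of the allocation.
  printf("requested %zu bytes, buffer sized with %d\n", content_length, alloc_size);
  return 0;
}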
See report id 99b9ae555621cf6e for an example crash; the attached proof-of-concept code that triggers the issue has been tested on the latest stable Linux x64 build.
See https://code.google.com/p/chromium/issues/detail?id=484270 for the chromium issue.
This bug is subject to a 90 day disclosure deadline. If 90 days elapse
without a broadly available patch, then the bug report will automatically
become visible to the public.
Comment 1 by cevans@google.com, May 4 2015
Gzip'ed data weighs in at ~4meg over-the-wire. Severity changed to critical since this is a remote with no user interaction, and it would result in RCE in an unsandboxed process, so it's a single bug chain for 64-bit chrome.
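As a rough illustration of that over-the-wire figure (an assumption on my part, not the attached PoC): a body of a little more than 4 GiB of zero bytes compresses down to only a few MiB of gzip, e.g. written with zlib:

// Hypothetical sketch (not the attached PoC): emit just over 4 GiB of
// zero bytes through zlib's gzip writer; the resulting payload.gz is only
// a few MiB, which a server could deliver with Content-Encoding: gzip.
#include <zlib.h>

int main() {
  gzFile out = gzopen("payload.gz", "wb9");  // "9" = maximum compression
  if (!out)
    return 1;
  static char zeros[1 << 20] = {};           // 1 MiB of zero bytes
  const unsigned long long total = (1ULL << 32) + (1ULL << 20);  // just over 4 GiB
  for (unsigned long long written = 0; written < total; written += sizeof(zeros))
    gzwrite(out, zeros, sizeof(zeros));
  gzclose(out);
  return 0;
}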
Comment, Jul 7 2015
Derestricting as have confirmed fixed in stable release under CVE-2015-1265.