From f49df54a36a39995be32782154f3ca2692f17ac4 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg
Date: Tue, 6 Dec 2005 23:05:51 +0000
Subject: [PATCH] 7.15.1 with the now to be announced security flaw fixed

---
 CHANGES       | 33 +++++++++++++++++++++++++++++++++
 RELEASE-NOTES |  4 +++-
 lib/url.c     | 10 ++++++++--
 3 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/CHANGES b/CHANGES
index b1cad112b..5a8496d6f 100644
--- a/CHANGES
+++ b/CHANGES
@@ -8,6 +8,39 @@
 
+Version 7.15.1 (7 December 2005)
+
+Daniel (6 December 2005)
+- Full text here: http://curl.haxx.se/docs/adv_20051207.html Pointed out by
+  Stefan Esser.
+
+  VULNERABILITY
+
+  libcurl's URL parser function can overflow a malloced buffer in two ways, if
+  given a too long URL.
+
+  These overflows happen if you
+
+  1 - pass in a URL with no protocol (like "http://") prefix, using no slash
+      and the string is 256 bytes or longer. This leads to a single zero byte
+      overflow of the malloced buffer.
+
+  2 - pass in a URL with only a question mark as separator (no slash) between
+      the host and the query part of the URL. This leads to a single zero byte
+      overflow of the malloced buffer.
+
+  Both overflows can be made with the same input string, leading to two single
+  zero byte overwrites.
+
+  The affected flaw cannot be triggered by a redirect, but the long URL must
+  be passed in "directly" to libcurl. It makes this a "local" problem. Of
+  course, lots of programs may still pass in user-provided URLs to libcurl
+  without doing much syntax checking of their own, allowing a user to exploit
+  this vulnerability.
+
+  There is no known exploit at the time of this writing.
+
 Daniel (2 December 2005)
 - Jamie Newton pointed out that libcurl's file:// code would close() a zero
   file descriptor if given a non-existing file.
diff --git a/RELEASE-NOTES b/RELEASE-NOTES
index f84c2c377..cd50aa110 100644
--- a/RELEASE-NOTES
+++ b/RELEASE-NOTES
@@ -19,6 +19,7 @@ This release includes the following changes:
 
 This release includes the following bugfixes:
 
+ o buffer overflow problem: http://curl.haxx.se/docs/adv_20051207.html
  o using file:// on non-existing files are properly handled
  o builds fine on DJGPP
  o CURLOPT_ERRORBUFFER is now always filled in on errors
@@ -54,6 +55,7 @@ advice from friends like these:
  Dave Dribin, Bradford Bruce, Temprimus, Ofer, Dima Barsky, Amol Pattekar,
  Jaz Fresh, tommink[at]post.pl, Gisle Vanem, Nis Jorgensen, Vilmos Nebehaj,
  Dmitry Bartsevich, David Lang, Eugene Kotlyarov, Jan Kunder, Yang Tse, Quagmire,
- Albert Chin, David Shaw, Doug Kaufman, Bryan Henderson, Jamie Newton
+ Albert Chin, David Shaw, Doug Kaufman, Bryan Henderson, Jamie Newton, Stefan
+ Esser
 
         Thanks! (and sorry if I forgot to mention someone)

diff --git a/lib/url.c b/lib/url.c
index bc6033a36..3715b10ca 100644
--- a/lib/url.c
+++ b/lib/url.c
@@ -2378,12 +2378,18 @@ static CURLcode CreateConnection(struct SessionHandle *data,
   if(urllen < LEAST_PATH_ALLOC)
     urllen=LEAST_PATH_ALLOC;
 
-  conn->pathbuffer=(char *)malloc(urllen);
+  /*
+   * We malloc() the buffers below urllen+2 to make room for two possibilities:
+   * 1 - an extra terminating zero
+   * 2 - an extra slash (in case a syntax like "www.host.com?moo" is used)
+   */
+
+  conn->pathbuffer=(char *)malloc(urllen+2);
   if(NULL == conn->pathbuffer)
     return CURLE_OUT_OF_MEMORY; /* really bad error */
   conn->path = conn->pathbuffer;
 
-  conn->host.rawalloc=(char *)malloc(urllen);
+  conn->host.rawalloc=(char *)malloc(urllen+2);
   if(NULL == conn->host.rawalloc)
     return CURLE_OUT_OF_MEMORY;
   conn->host.name = conn->host.rawalloc;