The proxy parser function strips trailing slashes off the proxy name,
which could lead to a mistaken zero-length proxy name that subsequent
functions would treat as no proxy at all!
This is now detected and an error is returned. Verified by the new test
1329.
Reported by: Chandrakant Bagul
Bug: http://curl.haxx.se/mail/lib-2012-02/0000.html
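A minimal sketch of the idea behind the fix, using hypothetical names
(this is not the actual parser code):

  #include <string.h>

  /* Illustrative only: strip trailing slashes, then refuse a name that
     became empty instead of silently treating it as "no proxy". */
  static int check_proxy_name(char *name)
  {
    size_t len = strlen(name);
    while(len && name[len - 1] == '/')
      name[--len] = '\0';
    if(!len)
      return 1; /* error: zero length proxy name */
    return 0;   /* OK */
  }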
This new option tells curl to not work around a security flaw in the
SSL3 and TLS1.0 protocols. It uses the new libcurl option
CURLOPT_SSL_OPTIONS with the CURLSSLOPT_ALLOW_BEAST bit set.
Allow an application to set libcurl-specific SSL options. The first and
only option supported right now is CURLSSLOPT_ALLOW_BEAST.
It makes libcurl disable any work-arounds the underlying SSL library
may have for a known security flaw in the SSL3 and TLS1.0 protocol
versions.
This is a reaction to us unconditionally removing that behavior after
this security advisory:
http://curl.haxx.se/docs/adv_20120124B.html
... it did however cause a lot of programs to fail because of old
servers not liking this work-around. Now programs can opt to decrease
the security in order to interoperate with old servers better.
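A minimal sketch of how an application could opt in to the less secure
behavior with the new option (the URL is just a placeholder and error
checking is omitted):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://old.example.com/");
      /* ask libcurl to disable the SSL library's work-around for the
         SSL3/TLS1.0 flaw, to interoperate with old servers */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                       (long)CURLSSLOPT_ALLOW_BEAST);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }

The curl tool's --ssl-allow-beast option does the equivalent of the
setopt call above.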
Use the new library option CURLOPT_TCP_KEEPALIVE rather than disabling this via
the sockopt callback. If --keepalive-time is used, apply the value to
CURLOPT_TCP_KEEPIDLE and CURLOPT_TCP_KEEPINTVL.
This adds three new options to control the behavior of TCP keepalives:
- CURLOPT_TCP_KEEPALIVE: enable/disable probes
- CURLOPT_TCP_KEEPIDLE: idle time before sending first probe
- CURLOPT_TCP_KEEPINTVL: delay between successive probes
While not all operating systems support the TCP_KEEPIDLE and
TCP_KEEPINTVL knobs, the library will still allow these options to be
set by clients, silently ignoring the values.
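A minimal sketch of how a client could use the new knobs; the values
and URL are arbitrary examples and error checking is omitted:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      /* enable TCP keepalive probes on the connection */
      curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
      /* wait 120 seconds of idle time before the first probe */
      curl_easy_setopt(curl, CURLOPT_TCP_KEEPIDLE, 120L);
      /* then send a probe every 60 seconds */
      curl_easy_setopt(curl, CURLOPT_TCP_KEEPINTVL, 60L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }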
When CURLOPT_REFERER has been used, curl_easy_reset() did not properly
clear it.
Verified with the new test 598.
Bug: http://curl.haxx.se/bug/view.cgi?id=3481551
Reported by: Michael Day
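A minimal sketch of the behavior the fix restores; the URLs are
placeholders and error checking is omitted:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_setopt(curl, CURLOPT_REFERER,
                       "http://referer.example.com/");
      curl_easy_perform(curl);

      /* the reset must clear the stored referer too, so the second
         request is expected to go out without a Referer: header */
      curl_easy_reset(curl);
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }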
We want to continue to the next URL to try even when libcurl returns a
failure. This makes -f with ranges still get subsequent URLs even if
occasional ones return errors. This was a regression as it used to
work and broke in the 7.23.0 release.
Added test case 1328 to verify the fix.
Bug: http://curl.haxx.se/bug/view.cgi?id=3481223
Reported by: Juan Barreto
When the target host was given as an IPv6 numerical address, it was not
properly put within square brackets for the Host: header in the CONNECT
request. The "normal" request did fine.
Reported by: "zooloo"
Bug: http://curl.haxx.se/bug/view.cgi?id=3482093
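For illustration, with a numerical IPv6 target the tunnel request is
now expected to look like this (host and port are only examples):

  CONNECT [2001:db8::1]:443 HTTP/1.1
  Host: [2001:db8::1]:443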
When support for nettle was added in 64f328c787, I overlooked
the fact that AC_CHECK_LIB doesn't add the tested lib to LIBS
on a successful check when a custom success code block is present.
(The previous version of the check had an empty block for
successful checks, adding the lib to LIBS implicitly.)
Therefore, explicitly add either nettle or gcrypt to LIBS, after
deciding which one to use. Even if they can be linked in
transitively, it is safer to actually link explicitly to them.
This fixes building with gnutls with linkers that don't allow
linking transitively, such as on Windows.
When connecting to a domain with multiple IP addresses, allow different,
decreasing connection timeout values. This should guarantee some
connection attempts with sufficiently long timeouts, while still
providing fallback.
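A hedged sketch of the idea; the function and the exact way the
remaining time is divided are illustrative assumptions, not the actual
libcurl code:

  /* give each remaining address a share of the time that is still
     left, so later attempts get shorter but non-zero timeouts and a
     fallback attempt is always possible */
  static long next_connect_timeout(long time_left_ms, int addrs_left)
  {
    if(addrs_left > 1)
      return time_left_ms / addrs_left;
    return time_left_ms; /* last address gets all remaining time */
  }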
With advice from Nikos Mavrogiannopoulos, changed the priority string to
add "actual priorities" and favour ARCFOUR. This makes libcurl work
better when enforcing SSLv3 with GnuTLS, both in the sense that the
libmicrohttpd test is now working again and in that it mitigates a
weakness in the older SSL/TLS protocols.
Bug: http://curl.haxx.se/mail/lib-2012-01/0225.html
Reported by: Christian Grothoff
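For context, a GnuTLS priority string is applied with
gnutls_priority_set_direct(); the string below is only a generic
placeholder, not the ARCFOUR-favouring string this change introduces:

  #include <gnutls/gnutls.h>

  static int set_priority(gnutls_session_t session, const char *prio)
  {
    const char *err_pos = NULL;
    /* prio would be the priority string favouring ARCFOUR; a generic
       default is used here as a placeholder */
    return gnutls_priority_set_direct(session,
                                      prio ? prio : "NORMAL", &err_pos);
  }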
Protocols (IMAP, POP3 and SMTP) that use the path part of a URL in a
decoded manner now use the new Curl_urldecode() function to reject URLs
with embedded control codes (anything that is or decodes to a byte value
less than 32).
URLs containing such codes could otherwise easily be used to do harm,
letting users trigger unintended actions with otherwise innocent tools
and applications. For example, a URL like
pop3://pop3.example.com/1%0d%0aDELE%201 could be handed to an app that
only wants a URL to fetch a mail, and would instead delete one.
This flaw is considered a security vulnerability: CVE-2012-0036
Security advisory at: http://curl.haxx.se/docs/adv_20120124.html
Reported by: Dan Fandrich
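A minimal sketch of the kind of check involved; this is illustrative
only and not the actual Curl_urldecode() code:

  #include <stddef.h>

  /* return non-zero if the decoded bytes contain a control code
     (a value below 32), which would allow protocol injection like the
     DELE example above */
  static int has_ctrl_codes(const unsigned char *decoded, size_t len)
  {
    size_t i;
    for(i = 0; i < len; i++)
      if(decoded[i] < 0x20)
        return 1;
    return 0;
  }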
OpenSSL added a work-around for an SSL 3.0/TLS 1.0 CBC vulnerability
(http://www.openssl.org/~bodo/tls-cbc.txt). In 0.9.6e they added a bit
to SSL_OP_ALL that _disables_ that work-around despite the fact that
SSL_OP_ALL is documented to do "rather harmless" workarounds.
The libcurl code uses the SSL_OP_ALL define and thus logically always
disables the OpenSSL fix.
In order to keep the secure work-around working, the
SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS bit must not be set, and this change
makes sure of that.
Reported by: product-security at Apple
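A minimal sketch of the resulting option handling; the function and
variable names are assumptions, but the OpenSSL flags are real:

  #include <openssl/ssl.h>

  static void set_ctx_options(SSL_CTX *ctx)
  {
    long ctx_options = SSL_OP_ALL;
  #ifdef SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS
    /* keep the empty-fragment countermeasure active by clearing the
       bit in SSL_OP_ALL that would disable it */
    ctx_options &= ~SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS;
  #endif
    SSL_CTX_set_options(ctx, ctx_options);
  }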
Using a URL with embedded user name and password didn't work if the host
was given as a numerical IPv6 string, like ftp://user:password@[::1]/
Reported by: Brandon Wang
Bug: http://curl.haxx.se/mail/archive-2012-01/0047.html
As is pointed out in this bug report, there can indeed be situations
where --stderr has a point even when the "real" stderr can be
redirected. Remove the superfluous and wrong comment.
Bug: http://curl.haxx.se/bug/view.cgi?id=3476020
Enabling the SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG option allowed
successful interoperability with the Netscape Enterprise Server 2.0.1
web server, released back in 1996, more than 15 years ago.
Due to CVE-2010-4180, option SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG has
become ineffective as of OpenSSL 0.9.8q and 1.0.0c. In order to mitigate
CVE-2010-4180 when using previous OpenSSL versions we no longer enable
this option regardless of OpenSSL version and SSL_OP_ALL definition.
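Sketched in the same spirit as the SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS
handling above (the variable name is an assumption, the flag is a real
OpenSSL define):

  /* mitigate CVE-2010-4180 with old OpenSSL versions by never enabling
     this bit, even though SSL_OP_ALL may include it */
  ctx_options &= ~SSL_OP_NETSCAPE_REUSE_CIPHER_CHANGE_BUG;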
Allows tests from the libtest subdir to generate log traces
similar to those of the curl tool with the --tracetime and
--trace-ascii options, but with output going to stderr.
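A hedged sketch of how such tracing can be produced with the public
debug callback API; the callback name is an assumption and a real
libtest helper may differ:

  #include <stdio.h>
  #include <curl/curl.h>

  /* write a rough ASCII trace of everything libcurl sends and receives
     to stderr, loosely like curl --trace-ascii - */
  static int trace_cb(CURL *handle, curl_infotype type,
                      char *data, size_t size, void *userp)
  {
    (void)handle;
    (void)userp;
    fprintf(stderr, "[%d] ", (int)type);
    fwrite(data, 1, size, stderr);
    fputc('\n', stderr);
    return 0;
  }

  /* enabled on an easy handle with:
     curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, trace_cb);
     curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); */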