The version number is removed to make this info consistent with how
we handle other MS and Linux system libraries, for which we don't
provide this info.
Identifier changed from 'WinSSPI' to 'schannel' given that this is the
actual provider of the SSL/TLS support. libcurl can still be built with
SSPI and without SCHANNEL support.
Previously it wasn't possible to connect to POP3 without specifying
the user name, as a CURLE_ACCESS_DENIED error would be returned. This
error occurred because, when no mailbox user was specified, USER would
be sent to the server with a blank user name and the server would
reply with -ERR.
This wasn't a problem prior to the 7.26.0 release but with the
introduction of custom commands the user and/or application developer
might want to issue a CAPA command without having to log in as a
specific mailbox user.
Additionally, this fix makes sure the newly introduced AUTH command
is not sent if no user name is specified.
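For illustration, a minimal sketch of the scenario this enables (the
server name is hypothetical): issuing CAPA through the custom command
support without providing any user name.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;
    CURLcode res = CURLE_OK;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      /* no CURLOPT_USERNAME on purpose; with this fix neither a blank
         USER nor an AUTH command is sent */
      curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com/");
      curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "CAPA");
      res = curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return (int)res;
  }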
NSS_InitContext() was introduced in NSS 3.12.5 and helps to prevent
collisions on NSS initialization/shutdown with other libraries.
Bug: https://bugzilla.redhat.com/738456
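As a rough sketch of what context-based initialization looks like
(simplified, not the actual libcurl code): each library holds its own
context, so one library shutting down no longer tears NSS away from
the others.

  #include <string.h>
  #include <nss.h>

  static NSSInitContext *ctx;

  static int my_nss_init(const char *certdir)
  {
    NSSInitParameters params;
    memset(&params, 0, sizeof(params));
    params.length = sizeof(params);
    /* a private context instead of the process-global NSS_Init() */
    ctx = NSS_InitContext(certdir, "", "", "", &params,
                          NSS_INIT_READONLY);
    return ctx ? 0 : -1;
  }

  static void my_nss_cleanup(void)
  {
    if(ctx)
      NSS_ShutdownContext(ctx);
  }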
The commit 9dd85bc unintentionally changed the way we compute the
time spent waiting for 100-continue. In particular, when using an SSL
client certificate, the time spent on the SSL handshake was included
and could cause the CURL_TIMEOUT_EXPECT_100 timeout to be triggered
by mistake.
Bug: https://bugzilla.redhat.com/767490
Reported by: Mamoru Tasaka
One new feature, one bug fix. Introduced references in this file for
the mentioned issues after this discussion:
http://curl.haxx.se/mail/lib-2011-12/0187.html
The plan is to let the references get moved over to the changes.html
file at release time.
Feature string literal NTLM_SSO renamed to NTLM_WB.
Preprocessor symbol USE_NTLM_SSO renamed to WINBIND_NTLM_AUTH_ENABLED.
curl's 'long' option 'ntlm-sso' renamed to 'ntlm-wb'.
Fix some comments to make clear that this is actually an NTLM
delegation.
When closing a connection, the speedchecker's timestamp is now deleted
so that it cannot accidentally be used by a fresh connection on the same
handle when examining the transfer speed.
Bug: https://bugzilla.redhat.com/679709
In case a client certificate is used, invalidate the SSL session
cache at the end of a session. This forces NSS to ask for a new client
certificate when connecting a second time to the same host.
Bug: https://bugzilla.redhat.com/689031
When NSS-powered libcurl connected to an SSL server with
CURLOPT_SSL_VERIFYPEER equal to zero, NSS remembered that the peer
certificate was accepted by libcurl and did not ask a second time when
connecting to the same server with CURLOPT_SSL_VERIFYPEER equal to
one.
This patch turns off the SSL session cache for the particular SSL socket
if peer verification is disabled. In order to avoid any performance
impact, the peer verification is completely skipped in that case, which
makes it even faster than before.
Bug: https://bugzilla.redhat.com/678580
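A sketch of the scenario that used to go wrong (the host name is
hypothetical): one easy handle reused with verification first off,
then on.

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl;

    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://host.example.com/");

      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L); /* insecure */
      curl_easy_perform(curl);

      /* with the fix, the session from the unverified connect is not
         reused, so NSS performs a full handshake and really verifies
         the peer certificate this time */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }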
Introducing a few CURL_SOCKOPT* defines for convenience. The new
CURL_SOCKOPT_ALREADY_CONNECTED signals to libcurl that the socket is
to be treated as already connected, so it will skip the connect()
call.
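A hedged sketch of the intended use (the callback names are made up):
the application hands libcurl a socket it connected itself and tells
libcurl not to connect() it again.

  #include <curl/curl.h>

  static curl_socket_t open_cb(void *clientp, curlsocktype purpose,
                               struct curl_sockaddr *addr)
  {
    (void)purpose; (void)addr;
    return *(curl_socket_t *)clientp; /* socket we connected ourselves */
  }

  static int sockopt_cb(void *clientp, curl_socket_t fd,
                        curlsocktype purpose)
  {
    (void)clientp; (void)fd; (void)purpose;
    return CURL_SOCKOPT_ALREADY_CONNECTED; /* skip the connect() call */
  }

  /* assuming 'sockfd' is an already-connected curl_socket_t:
     curl_easy_setopt(curl, CURLOPT_OPENSOCKETFUNCTION, open_cb);
     curl_easy_setopt(curl, CURLOPT_OPENSOCKETDATA, &sockfd);
     curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb); */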
... and update the curl.1 and curl_easy_setopt.3 man pages so that
they do not suggest using an OpenSSL utility if curl is not built
against OpenSSL.
Bug: https://bugzilla.redhat.com/669702
On Windows, translate WSAGetLastError() to errno values, as GnuTLS
does internally too. This is necessary because send() and recv() on
Windows don't set errno when they fail, but GnuTLS expects a proper
errno value.
Bug: http://curl.haxx.se/bug/view.cgi?id=3110991
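Roughly the idea, sketched with only a few representative codes (the
real mapping covers more of them):

  #include <winsock2.h>
  #include <errno.h>

  static void set_errno_from_wsa(void)
  {
    switch(WSAGetLastError()) {
    case WSAEWOULDBLOCK:
      errno = EAGAIN; /* what GnuTLS checks for on EWOULDBLOCK */
      break;
    case WSAEINTR:
      errno = EINTR;
      break;
    default:
      errno = EIO;
      break;
    }
  }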
When curl calls a function from that library, it needs to explicitly
link to the library instead of piggybacking on libcurl's own
dependency. Without this, GNU ld with the --no-add-needed flag (which
Fedora now uses by default) fails when linking.
Reported by: Quanah Gibson-Mount
Bug: http://curl.haxx.se/mail/lib-2010-09/0085.html
When getting multiple URLs, curl didn't properly reset the byte
counter after a successful transfer, so if the subsequent transfer
failed it would wrongly use the previous byte counter and behave badly
(segfault) because of that. The code assumes that the byte counter and
the 'stream' pointer are in sync.
Reported by: Jon Sargeant
Bug: http://curl.haxx.se/bug/view.cgi?id=3028241
When configured with '--without-ssl --with-nss', NTLM authentication
now uses the NSS crypto library for MD5 and DES. For MD4 we have a
local implementation in that case. More details are available at
https://bugzilla.redhat.com/603783
In order to get it working, curl_global_init() must be called with
CURL_GLOBAL_SSL or CURL_GLOBAL_ALL. That's necessary because NSS needs
to be initialized globally, and we do so only when the NSS library is
actually required by the protocol. The mentioned call of
curl_global_init() is responsible for creating the initialization
mutex.
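In application code that amounts to initializing libcurl globally
before any transfer, for example:

  #include <curl/curl.h>

  int main(void)
  {
    /* CURL_GLOBAL_ALL includes CURL_GLOBAL_SSL; with an NSS-powered
       libcurl this creates the NSS initialization mutex */
    curl_global_init(CURL_GLOBAL_ALL);
    /* ... create handles and perform transfers ... */
    curl_global_cleanup();
    return 0;
  }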
The NSS initialization scenario was also slightly changed, in
particular the loading of the NSS PEM module. The module used to be
loaded right after the NSS library was initialized. Now the library is
initialized as soon as SSL or NTLM is required, while the PEM module
is not loaded until SSL is actually required.
curl didn't properly handle escaping characters in a URL with the use
of backslash. It made an attempt, but it failed, as reported in bug
3022551. The described example used the URL
"http://example.com?{AB,C\,D}".
I've now removed the special handling of letters following the
backslash and also removed the bad extra check that triggered this
particular bug.
Bug: http://curl.haxx.se/bug/view.cgi?id=3022551
Reported by: Jon Sargeant
Was seeing spurious SSL connection aborts using libcurl and
OpenSSL. I tracked it down to uncleared error state on the
OpenSSL error stack - patch attached deals with that.
Rough idea of the problem:
1. Code that uses libcurl calls some library that uses OpenSSL but
   doesn't clear the OpenSSL error stack after an error.
2. ssluse.c calls SSL_read, which eventually gets an EWOULDBLOCK from
   the OS and returns -1 to indicate an error.
3. ssluse.c calls SSL_get_error. First thing, SSL_get_error calls
   ERR_get_error to check the OpenSSL error stack, finds an old error
   and returns SSL_ERROR_SSL instead of SSL_ERROR_WANT_READ or
   SSL_ERROR_WANT_WRITE.
4. ssluse.c returns an error and aborts the connection.
Solution:
Clear the OpenSSL error stack before calling an SSL_* operation if
we're going to call SSL_get_error afterwards.
Notes:
This is much more likely to happen with the multi interface because
it's easier to intersperse other calls to the OpenSSL library in the
same thread.
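The pattern of the fix, as a small sketch (the wrapper name is made
up):

  #include <openssl/ssl.h>
  #include <openssl/err.h>

  static int read_with_clean_stack(SSL *ssl, void *buf, int len)
  {
    int rc;

    ERR_clear_error(); /* drop stale errors left by other OpenSSL users */
    rc = SSL_read(ssl, buf, len);
    if(rc <= 0) {
      int err = SSL_get_error(ssl, rc);
      if(err == SSL_ERROR_WANT_READ || err == SSL_ERROR_WANT_WRITE)
        return 0; /* no data yet, not a real error */
    }
    return rc;
  }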
Enable OpenLDAP support for cygwin builds. This support was disabled back
in 2008 due to incompatibilities between OpenSSL and OpenLDAP headers.
cygwin's OpenSSL 0.9.8l and OpenLDAP 2.3.43 versions on cygwin 1.5.25
allow building an OpenLDAP-enabled libcurl that supports systems back
to Windows 95.
Remove the non-functional CURL_LDAP_HYBRID code and references.
Jason McDonald posted bug report #3006786 when he found that the SFTP
code didn't time out properly in several places in the code even if a
timeout was set properly.
Based on his suggested patch, I wrote a different implementation that
I think addresses the issue better, and which also uses the connect
timeout for the initial part of the SSH/SFTP negotiation done during
the "protocol connect" phase.
(http://curl.haxx.se/bug/view.cgi?id=3006786)
Igor Novoseltsev reported a problem with the multi socket API and
using timeouts and timers. It boiled down to a problem with libcurl's
use of GetTickCount() internally to figure out the current time, while
Igor's own application code used another function call.
It made his app call the socket API timeout function a bit
_before_ libcurl would consider the timeout to trigger, and that
could easily lead to timeouts or stalls in the app. It seems
GetTickCount() in general often has no better resolution than
16ms and switching to the alternative function
QueryPerformanceCounter has its share of problems:
http://www.virtualdub.org/blog/pivot/entry.php?id=106
We address this problem by simply having libcurl treat timers that
have already occurred, or will occur within 40ms, as due for
treatment. I'm confident that there are other implementations and
operating systems with similarly inaccurate timer functions, so it
makes sense to apply this generically, and I don't believe we
sacrifice much by adding a 40ms inaccuracy to these timeouts.
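The idea in miniature (the names are hypothetical):

  #define TIMER_SLACK_MS 40 /* tolerance for coarse clocks such as
                               GetTickCount() */

  static int timer_is_due(long expire_ms, long now_ms)
  {
    /* fire timers that have passed or will pass within the slack */
    return (expire_ms - now_ms) <= TIMER_SLACK_MS;
  }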
This makes the LDAP code much cleaner, nicer and in general a better
libcurl citizen. If a new enough OpenLDAP version is detected, the new
and shiny lib/openldap.c code is used instead of the old cruft.
Code by Howard, minor cleanups by Daniel.
In a normal expression, doing [unsigned short] + 1 will not wrap at
16 bits, so the comparisons and outputs were done wrong. I added a
macro to make sure it gets done right.
Douglas Kilpatrick filed bug report #3004787 about it:
http://curl.haxx.se/bug/view.cgi?id=3004787
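The pitfall and the masking fix, sketched (the macro name is made
up): integer promotion computes [unsigned short] + 1 as an int, so it
reaches 65536 instead of wrapping to 0.

  #include <stdio.h>

  #define NEXT16(x) (((x) + 1) & 0xffff) /* wrap explicitly at 16 bits */

  int main(void)
  {
    unsigned short block = 65535;
    printf("%d\n", block + 1);     /* 65536: promoted to int, no wrap */
    printf("%d\n", NEXT16(block)); /* 0: wraps as intended */
    return 0;
  }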
Eric Mertens posted bug report #3003005 pointing out that the
libcurl TFTP code was not sending the timeout option properly to
the server, and suggested a fix.
(http://curl.haxx.se/bug/view.cgi?id=3003005)
John-Mark Bell filed bug #3000052 that identified a problem (with
an associated patch) with the OpenSSL handshake state machine
when the multi interface is used:
Performing an https request using a curl multi handle and using
select or epoll to wait for events results in a hang. It appears
that the cause is the fix for bug #2958179, which makes
ossl_connect_common unconditionally return from the step 2 loop
when fetching from a multi handle.
When ossl_connect_step2 has completed, it updates
connssl->connecting_state to ssl_connect_3. ossl_connect_common
will then return to the caller, as a multi handle is in
use. Eventually, the client code will call curl_multi_fdset to
obtain an updated fdset to select or epoll on. For https
requests, curl_multi_fdset will cause https_getsock to be called.
https_getsock will only return a socket handle if the
connecting_state is ssl_connect_2_reading or
ssl_connect_2_writing. Therefore, the client will never obtain a
valid fdset, and thus not drive the multi handle, resulting in a
hang.
(http://curl.haxx.se/bug/view.cgi?id=3000052)
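For reference, the client pattern described in the report looks
roughly like this; before the fix, a handle parked in ssl_connect_3
exposed no socket to this loop, so select() had nothing to wake up on:

  #include <curl/curl.h>
  #include <sys/select.h>

  static void drive(CURLM *multi)
  {
    int running = 1;

    while(running) {
      fd_set rd, wr, ex;
      int maxfd = -1;
      struct timeval tv = {1, 0}; /* don't block forever */

      FD_ZERO(&rd); FD_ZERO(&wr); FD_ZERO(&ex);
      curl_multi_fdset(multi, &rd, &wr, &ex, &maxfd);
      if(maxfd >= 0)
        select(maxfd + 1, &rd, &wr, &ex, &tv);
      curl_multi_perform(multi, &running);
    }
  }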
Sebastian V reported bug #3000056 identifying a problem with
redirect following. It showed that when curl followed redirects
it didn't properly ignore the response body of the 30X response
if that response was using compressed Content-Encoding!
(http://curl.haxx.se/bug/view.cgi?id=3000056)
Dirk Manske reported a regression. When connecting with the multi
interface, there were situations where libcurl wouldn't store
connect time correctly as it used to (and is documented to) do.
Using his fine sample program we could repeat it, and I wrote up
test case 573 using that code. The problem does not easily show
itself using the local test suite though.
The fix, also as suggested by Dirk, is a bit on the ugly side as it
adds yet another call to Curl_verboseconnect() and sets the
TIMER_CONNECT time. That situation is subject to closer inspection in
the future.
Prefixing the FTP quote commands with an asterisk really only worked
for the postquote actions. This is now fixed and test case 227 has
been extended to verify it.
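For example (the file names are hypothetical and 'curl' is an
initialized easy handle):

  struct curl_slist *cmds = NULL;

  /* the leading '*' makes libcurl ignore a failure of that command */
  cmds = curl_slist_append(cmds, "*DELE old-upload.txt"); /* may fail */
  cmds = curl_slist_append(cmds, "MKD backup");       /* must succeed */
  curl_easy_setopt(curl, CURLOPT_QUOTE, cmds); /* pre-transfer cmds */
  /* ... perform the transfer, then: curl_slist_free_all(cmds); */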
Matt Wixson found and fixed a bug in the SCP/SFTP area where the
code treated a 0 return code from libssh2 to be the same as
EAGAIN while in reality it isn't. The problem caused a hang in
SFTP transfers from a MessageWay server.
Ben Greear brought a patch that from now on allows all protocols to
specify a user name and password within the URL, in the same manner
HTTP and FTP have been allowed to in the past - although far from all
of the libcurl-supported protocols actually have that feature in their
URL definition spec.
Bob Richmond: There's an annoying situation where libcurl will read
new HTTP response data from a socket and then check for a timeout, if
one is set. If the last packet received constitutes the end of the
response body, libcurl still treats it as a timeout condition and
reports a message like:
"Operation timed out after 3000 milliseconds with 876 out of 876
bytes received"
It should only be a timeout if the timer lapsed and we DIDN'T receive
the end of the response body yet.
Christopher Conroy fixed a problem with RTSP and GET_PARAMETER
reported to us by Massimo Callegari. There's a new test case 572
that verifies this now.
Kenny To filed bug report #2963679 with a patch to fix a problem he
experienced when doing multi interface HTTP POST over a proxy using
PROXYTUNNEL. He found a case where it would connect fine but
bits.tcpconnect was not set correctly, so libcurl didn't work
properly.
(http://curl.haxx.se/bug/view.cgi?id=2963679)
Akos Pasztory filed debian bug report #572276
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=572276)
mentioning a problem with a resource that returns chunked-encoded
_and_ with a Content-Length and libcurl failed to properly ignore
the latter information.
Hauke Duden provided an example program that made the multi
interface crash. His example simply used the multi interface and
did first one FTP transfer and after completion it used a second
easy handle and did another FTP transfer on the same FTP server.
This triggered a bug in the "delayed easy handle kill" system
that curl uses: when an FTP connection is left alive it must keep
an easy handle around internally - only for the purpose of having
an easy handle when it later disconnects it. The code assumed
that when the easy handle was removed and an internal reference
was made, that version could be killed later on when a new easy
handle came using the same connection. This was wrong as Hauke's
example showed that the removed handle wasn't killed for real
until later. This caused a double close attempt => segfault.
The problem mentioned on Dec 10 2009
(http://curl.haxx.se/bug/view.cgi?id=2905220) was only partially fixed.
Partially because an easy handle can be associated with many connections in
the cache (e.g. if there is a redirect during the lifetime of the easy
handle). The previous patch only cleaned up the first one. The new fix now
removes the easy handle from all connections, not just the first one.
I ran into some issues with the GSSAPI tests in configure.ac. The
tests first try to determine the include dirs and libs and set
CPPFLAGS and LIBS accordingly. They then check for the headers and
finally set LIBS a second time, causing the libs to be included twice.
The first setting of LIBS seems redundant and should be left out,
since the first part is otherwise just about finding headers.
My second issue is that 'krb5-config --libs gssapi' on Darwin is less
than useless and returns junk that, while it happens to work with gcc,
causes clang to choke. For example, --libs returns $CFLAGS along with
the libs, which is just wrong. Simply setting 'LIBS="$LIBS
-lgssapi_krb5 -lresolv"' on Darwin is sufficient.
This makes sure that when using sub-second timeouts, there's no final
bad 1000ms wait. Previously, a sub-second timeout would often make the
elapsed time end up rounded up to the nearest second (e.g. 1s for a
200ms timeout).
The per-command response timeout is now combined with the global
timeout, if one is set. Also, as was reported in bug report #2956437
by Ryan Chan, the time stamp to use as the basis for the per-command
timeout was not set properly in the DONE phase for FTP (but not for
SMTP), so I fixed that just now. This was a regression compared to
7.19.7, due to the conversion of the FTP code over to the generic
pingpong concepts.
http://curl.haxx.se/bug/view.cgi?id=2956437
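The timeout selection amounts to something like this sketch (the
function and parameter names are made up):

  /* pick the per-response timeout if set, but never let it run past
     whatever remains of the global timeout; zero means "not set" */
  static long pp_timeout_ms(long response_ms, long global_left_ms)
  {
    long t = response_ms ? response_ms : global_left_ms;
    if(global_left_ms && t > global_left_ms)
      t = global_left_ms;
    return t;
  }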