(http://curl.haxx.se/bug/view.cgi?id=2784055) identifying a problem
connecting to SOCKS proxies when using the multi interface. It turned out
that this previously hardly worked at all: we need to wait for the TCP
connect to be properly verified before doing the SOCKS magic.
There's still a flaw in the FTP code for this.
(http://curl.haxx.se/bug/view.cgi?id=2786255) with a patch, identifying how
libcurl did not deal with SSL session ids properly if the server rejected a
re-use of one. Starting now, it will forget the rejected one and remember
the new one. This change was made for OpenSSL only; other SSL library
backends likely need similar fixes.
If the CURLOPT_PORT option is used on an FTP URL like
"ftp://example.com/file;type=A" the ";type=A" is stripped off.
I added test case 562 to verify, only to find that I couldn't reproduce
this bug, so I hereby consider it not a bug anymore!
I've now made TFTP "connections" no longer be kept for re-use within
libcurl. TFTP is UDP-based, so the benefit was really low (if it existed at
all) to begin with, so rather than tracking down and fixing this problem we
simply removed the re-use. I also enabled test case 1099, which I wrote a
few days ago, to verify that this change fixes the reported problem.
Chen pointed out how curl couldn't upload with resume when reading from a
pipe.
This ended up with the introduction of a new return code for the
CURLOPT_SEEKFUNCTION callback that basically says that the seek failed but
that libcurl may try to resolve the situation anyway. In our case this means
libcurl will attempt to read that much data from the stream instead of
seeking, and that way curl can now upload with resume when data is read from
a stream!
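As a sketch, a seek callback for a non-seekable stream could look like this
(assuming the new return code is the CURL_SEEKFUNC_CANTSEEK constant; the
callback and handle names are made up for illustration):

  /* return CURL_SEEKFUNC_CANTSEEK to tell libcurl the stream can't seek,
     but that it may recover by reading and discarding data instead */
  static int my_seek(void *userp, curl_off_t offset, int origin)
  {
    (void)userp; (void)offset; (void)origin;
    return CURL_SEEKFUNC_CANTSEEK;
  }

  curl_easy_setopt(curl, CURLOPT_SEEKFUNCTION, my_seek);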
Koenig pointed out that the man page didn't mention that the *_proxy
environment variables can be specified in lower case or UPPER CASE, and that
the lower case version takes precedence.
how it occurs (http://curl.haxx.se/mail/lib-2009-04/0289.html). The
conclusion was that if an error is detected and Curl_done() is called for
the connection, ftp_done() could at times return a different error code that
would then take precedence, and that new code confused existing logic that
only works for the first error code (CURLE_SEND_ERROR).
OBJECTPOINT options. Now we've introduced a new function - my_setopt_str -
within the app for setting plain string options to avoid the risk of this
mistake happening.
proxy. libcurl would then wrongly close the connection after each
request. In his case it had the weird side-effect that it killed NTLM auth
for the proxy, causing an infinite loop!
I added test case 1098 to verify this fix. However, the test case does not
properly verify that the transfers are done persistently - I couldn't think
of a clever way to achieve that right now - so you need to read the stderr
output after a test run to see that it truly did the right thing.
Storsjo pointed out how setting CURLOPT_NOBODY to 0 could be downright
confusing as it set the method to either GET or HEAD. The example he showed
looked like:
  curl_easy_setopt(curl, CURLOPT_PUT, 1L);
  curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);
The new way doesn't alter the method until the request is about to start. If
CURLOPT_NOBODY is then 1 the HTTP request will be HEAD. If CURLOPT_NOBODY is
0 and the request happens to have been set to HEAD, it will then instead be
set to GET. I believe this will be less surprising to users, and hopefully
not hit any existing users badly.
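As an illustration (a sketch with a made-up handle name), this sequence now
ends up doing a plain GET, since the method is only decided when the request
starts:

  curl_easy_setopt(curl, CURLOPT_NOBODY, 1L); /* would mean HEAD... */
  curl_easy_setopt(curl, CURLOPT_NOBODY, 0L); /* ...but this reverts it */
  curl_easy_perform(curl);                    /* method decided here: GET */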
out to be leaking cacerts. Kamil Dudka helped me complete the fix. The issue
is found in Redhat's bug tracker:
https://bugzilla.redhat.com/show_bug.cgi?id=453612
There are still memory leaks present, but they seem to have other reasons.
and 1 on fatal errors. Previously it only mentioned non-zero on fatal
errors. This is a slight change in meaning, but it follows what we've done
elsewhere before, and it opens the door for LOTS more useful return codes
whenever we can think of them...
non-configured libcurl. In this case the curl_off_t data type was tied to
the off_t data type, which depends on the _FILE_OFFSET_BITS setting. This is
exactly the unwanted configuration for our curl_off_t data type, which must
not depend on any such setting. This breaks the ABI for libcurl libraries
built with Sun compilers without having run the configure script, with
_FILE_OFFSET_BITS different than 64 and using the ILP32 data model.
curl_easy_duphandle did not necessarily duplicate the CURLOPT_COOKIEFILE
option. It only enabled the cookie engine in the destination handle if
data->cookies is not NULL (where data is the source handle). In the case of
a newly initialized handle that just had cookie support enabled by a
curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "") call, handle->cookies was
still NULL because the setopt call only appends the value to
data->change.cookielist; hence duplicating this handle would not have the
cookie engine switched on.
We also concluded that the slist-functionality would be suitable for being
put in its own module rather than simply hanging out in lib/sendf.c so I
created lib/slist.[ch] for them.
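A sketch of the case that used to fail (error handling omitted):

  CURL *easy = curl_easy_init();
  /* enable the cookie engine without reading any cookie file */
  curl_easy_setopt(easy, CURLOPT_COOKIEFILE, "");
  /* previously the duplicate ended up without the cookie engine enabled */
  CURL *dup = curl_easy_duphandle(easy);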
scripts to make it detect a bad checkout earlier. People with older
checkouts who don't do cvs update with the -d option won't get the new dirs
and then will get funny outputs that can be a bit hard to understand and
fix.
in the gnutls code where we were checking for negative values for errors,
when the man pages state that GNUTLS_E_SUCCESS is returned on success and
other values indicate error conditions.
curl didn't use sprintf() in a way that is documented to work in POSIX but
since we use our own printf() code (from libcurl) that shouldn't be a
problem. Nonetheless I modified the code to not rely on such particular
features and to not cause further raised eyebrows for no good reason.
(http://curl.haxx.se/docs/adv_20090303.html also known as CVE-2009-0037) in
which previous libcurl versions (by design) can be tricked to access an
arbitrary local/different file instead of a remote one when
CURLOPT_FOLLOWLOCATION is enabled. This flaw is now fixed in this release,
together with the addition of two new setopt options for controlling this
new behavior:
o CURLOPT_REDIR_PROTOCOLS controls what protocols libcurl is allowed to
follow to when CURLOPT_FOLLOWLOCATION is enabled. By default, this option
excludes the FILE and SCP protocols, and thus you need to explicitly allow
them in your app if you really want that behavior.
o CURLOPT_PROTOCOLS controls what protocol(s) libcurl is allowed to fetch
using the primary URL option. This is useful if you want to let a user or
other outsiders control what URL to pass to libcurl, yet not allow all the
protocols libcurl may have been built to support. A usage sketch follows
below.
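A sketch using the CURLPROTO_* bitmask values these options take:

  /* only fetch HTTP or HTTPS URLs, and only follow redirects to HTTP or
     HTTPS, explicitly ruling out FILE, SCP and everything else */
  curl_easy_setopt(curl, CURLOPT_PROTOCOLS,
                   CURLPROTO_HTTP | CURLPROTO_HTTPS);
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                   CURLPROTO_HTTP | CURLPROTO_HTTPS);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);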
curl_global_init() function to properly keep the performing functions
thread-safe. We previously (28 April 2007) moved the init to a later time
just to avoid it failing very early when libgcrypt dislikes the situation,
but that move was bad and the fix should rather be done in libgcrypt or
elsewhere.
CURLINFO_CONTENT_LENGTH_DOWNLOAD and CURLINFO_CONTENT_LENGTH_UPLOAD return
-1 if the sizes aren't known. Previously these returned 0, making it
impossible to detect the difference between an actual zero and unknown.
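A sketch of how an app can now tell the cases apart (variable names made
up):

  double clen;
  if(curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &clen) ==
     CURLE_OK) {
    if(clen == -1)
      printf("size unknown\n"); /* previously indistinguishable from 0 */
    else
      printf("size: %.0f bytes\n", clen);
  }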
FTP with the multi interface: when a transfer fails, like when aborted by a
write callback, the control connection was wrongly closed and thus not
re-used properly.
This change is also an attempt to cleanup the code somewhat in this area, as
now the FTP code attempts to keep (better) track of the pending responses
that need to be read in ftp_done().
libcurl did a superfluous 1000ms wait when doing SFTP downloads!
We read data with libssh2 while doing the "DO" operation for SFTP and then
when we were about to start getting data for the actual file part, the
"TRANSFER" part, we waited for socket action (in 1000ms) before doing a
libssh2-read. But in this case libssh2 had already read and buffered the
data so we ended up always just waiting 1000ms before we get working on the
data!
plain FTP connections, and it will then allow MKD to fail once and retry the
CWD afterwards. This is especially useful if you're doing many simultaneous
connections against the same server and they all have this option enabled:
one connection's CWD may first fail, then another connection does MKD before
this one does, so its MKD fails, but then trying CWD again works! The
numbers can (should?) now be set with the convenience enums called
CURLFTP_CREATE_DIR and CURLFTP_CREATE_DIR_RETRY.
Tests have shown that if you're making an application that uploads a set of
files to an FTP server, you get a noticeable gain in speed by using multiple
connections, and this option will then be very useful.
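A sketch of enabling the retry behavior (assuming these enums are values
for the existing CURLOPT_FTP_CREATE_MISSING_DIRS option):

  /* create missing dirs, and retry CWD once if our MKD fails because
     another connection created the dir first */
  curl_easy_setopt(curl, CURLOPT_FTP_CREATE_MISSING_DIRS,
                   (long)CURLFTP_CREATE_DIR_RETRY);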
the condition in the previous request was unmet. This is typically a time
condition set with CURLOPT_TIMECONDITION, and it was previously not
possible to reliably figure this out. From bug report #2565128
(http://curl.haxx.se/bug/view.cgi?id=2565128)
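The fragment above doesn't name the new way to query this, but the
description matches the CURLINFO_CONDITION_UNMET getinfo value; a sketch
(the timestamp variable is made up):

  long unmet = 0;
  curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                   (long)CURL_TIMECOND_IFMODSINCE);
  curl_easy_setopt(curl, CURLOPT_TIMEVALUE, (long)timestamp);
  curl_easy_perform(curl);
  curl_easy_getinfo(curl, CURLINFO_CONDITION_UNMET, &unmet);
  if(unmet)
    printf("condition not met, nothing was transferred\n");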
interface and setting CURLMOPT_MAXCONNECTS to something less than the number
of handles you add to the multi handle. All the connections that didn't fit
in the cache would neither be properly disconnected nor freed!
version 1.1 instead of 1.0 like before. This change also introduces the new
proxy type for libcurl called 'CURLPROXY_HTTP_1_0' that then allows apps to
switch (back) to CONNECT 1.0 requests. The curl tool also got a --proxy1.0
option that works exactly like --proxy but sets CURLPROXY_HTTP_1_0.
I updated all test cases that use CONNECT: some now use --proxy1.0 and some
were updated to do CONNECT 1.1, to get both versions exercised.
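A sketch of forcing the old behavior from an app (the proxy host is made
up):

  /* make libcurl issue "CONNECT ... HTTP/1.0" like before */
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTP_1_0);
  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");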
enabled, we can now take advantage of its brand new AF_UNSPEC support in
ares_gethostbyname(). This makes test case 241 finally run fine for me with
this setup, since it now parses the "::1 ip6-localhost" line in my
/etc/hosts file fine!
(http://curl.haxx.se/bug/view.cgi?id=2550061) mentioning that I failed to
properly make sure that the VC9 makefiles got included in the latest
release. I've now fixed the release script and verified it so next release
will hopefully include them properly!
Curl_sspi_global_init() and Curl_sspi_global_cleanup() which previously were
named Curl_ntlm_global_init() and Curl_ntlm_global_cleanup() in http_ntlm.c
Also adjusted socks_sspi.c to remove the link-time dependency on the Windows
SSPI library, using it now in the same way as is done in http_ntlm.c.
CURLOPT_SOCKS5_GSSAPI_SERVICE and CURLOPT_SOCKS5_GSSAPI_NEC to allow libcurl
to do GSS-style authentication with SOCKS5 proxies. The curl tool got the
options called --socks5-gssapi-service and --socks5-gssapi-nec to enable
these.
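A sketch of using them (the proxy host is made up; "rcmd" is, as far as I
know, the default service name):

  curl_easy_setopt(curl, CURLOPT_PROXY, "socks.example.com");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_SOCKS5);
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_SERVICE, "rcmd");
  /* set to 1L for the NEC-style unprotected negotiation variant */
  curl_easy_setopt(curl, CURLOPT_SOCKS5_GSSAPI_NEC, 0L);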
disable "rfc4507bis session ticket support". rfc4507bis was later turned
into the proper RFC5077 it seems: http://tools.ietf.org/html/rfc5077
The enabled extension concerns session management. I wonder how often
libcurl stops a connection and then resumes a TLS session. Also, sending the
session data is some overhead. I suggest that you just use your proposed
patch (which explicitly disables TICKET).
If someone writes an application with libcurl and OpenSSL and wants to
enable the feature, they can do this in the SSL callback.
Sharad Gupta brought this to my attention. Peter Sylvester helped me decide
on the proper action.
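A sketch of re-enabling the feature from an app via the SSL callback
(assuming an OpenSSL version that provides SSL_CTX_clear_options()):

  static CURLcode sslctx_cb(CURL *curl, void *sslctx, void *parm)
  {
    (void)curl; (void)parm;
    /* undo libcurl's SSL_OP_NO_TICKET to allow RFC 5077 session tickets */
    SSL_CTX_clear_options((SSL_CTX *)sslctx, SSL_OP_NO_TICKET);
    return CURLE_OK;
  }

  curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_cb);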
(http://curl.haxx.se/bug/view.cgi?id=2535504) pointing out that realms with
quoted quotation marks in HTTP Digest headers didn't work. I've now added
test case 1095 that verifies my fix.
They basically offer the same thing that previously only the NO_PROXY
environment variable offered: a list of host names that shall not use the
proxy even if one is specified.
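The option names are missing from this fragment, but the description
matches CURLOPT_NOPROXY (and the curl tool's --noproxy); a sketch with
made-up hosts:

  curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
  /* never use the proxy for these hosts */
  curl_easy_setopt(curl, CURLOPT_NOPROXY, "localhost,.example.com");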
clarity. This does fix one problem that causes ;type=i FTP URLs
to fail in the Turkish locale when CURLOPT_PROXY_TRANSFER_MODE is
used (test case 561).
Added tests 561 and 1092 through 1094 to test various combinations
of ;type= and ;mode= URLs that could potentially fail in the Turkish
locale.
by Daniel Black, I've now added magic to the configure script that makes it
use pkg-config to detect gnutls details as well if the existing method
(using libgnutls-config) fails. While doing this, I cleaned up and unified
the pkg-config usage when detecting openssl and nss as well.
When using the multi interface over HTTP and the server returns a Location
header, the running easy handle would get stuck in the CURLM_STATE_PERFORM
state, leaving the external event loop stuck waiting for data from the
incoming socket (when using the curl_multi_socket_action stuff). While this
bug was pretty hard to find, it seems to require only a one-line fix. The
break statement on line 1374 in multi.c caused the function to skip the call
to multistate().
How to reproduce this bug? Well, that's another question. evhiperfifo.c in
the examples directory chokes on this bug only _sometimes_, probably
depending on how fast the URLs are added. One way of testing the bug out is
writing to hiper.fifo from more than one source at the same time.
curl_easy_reset() by creating Curl_init_userdefined(). This had the side effect
of fixing curl_easy_reset() so it now also resets CURLOPT_FTP_FILEMETHOD and
CURLOPT_SSL_SESSIONID_CACHE.
I have to jump through a few hoops now with the NSS library initialization
since another part of an application may have already initialized NSS by the
time Curl gets invoked. This patch is more careful to only shutdown the NSS
library if Curl did the initialization.
It also adds a bit of code to set the default ciphers if the app that
called NSS_Init* did not call NSS_SetDomesticPolicy() or set specific
ciphers. One might argue that this lets other application developers be
lazy and/or not use the NSS API correctly, and you'd be right.
But still, this will avoid terribly difficult-to-trace crashes and is
generally helpful.
(http://curl.haxx.se/bug/view.cgi?id=2413067) that identified a problem that
would cause libcurl to mark a DNS cache entry "in use" eternally if the
subsequent TCP connect failed. It would thus never get pruned and refreshed
as it should've been.
pipelining, as libcurl could then easily get confused and A) work on the
handle that was not "first in queue" on a pipeline, or even B) tell the app
to REMOVE a socket while it was in use by a second handle in a pipeline. Both
errors caused hanging or stalling applications.
was actually ready to get done, as the internal time resolution is higher
than the returned millisecond timer. Therefore it could cause applications
running on fast processors to do short bursts of busy-loops.
curl_multi_timeout() will now only return 0 if the timeout has actually
already triggered.
now has an improved ability to do right when the multi interface (both
"regular" and multi_socket) is used for SCP and SFTP transfers. This should
result in (much) less busy-loop situations and thus less CPU usage with no
speed loss.
operation didn't complete properly if the EAGAIN equivalent was returned, as
libcurl would simply continue with a half-completed close operation. This
ruined persistent connection re-use and caused some SSH protocol errors in
general. The correction unfortunately adds a
blocking function - doing it entirely non-blocking should be considered for
a better fix.
removing easy handles from multi handles when the easy handle is/was within
a HTTP pipeline. His bug report #2351653
(http://curl.haxx.se/bug/view.cgi?id=2351653) was also related and was
eventually fixed by a patch by Igor himself.
duphandle+curl_mutli" (http://curl.haxx.se/bug/view.cgi?id=2416182) showed
that curl_easy_duphandle() wrongly also copied the pointer to the connection
cache, which was plain wrong and caused a segfault if the handle would be
used in a different multi handle than the handle it was duplicated from.
there are servers "out there" that rely on the client doing this broken
Digest authentication. Apache even comes with an option to work with such
broken clients.
The difference only matters for URLs that contain a query part (a '?'
character and text to the right of it).
libcurl now supports this quirk, and you enable it by setting the
CURLAUTH_DIGEST_IE bit in the bitmask you pass to the CURLOPT_HTTPAUTH or
CURLOPT_PROXYAUTH options, so it is controlled individually for server and
proxy.
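A usage sketch:

  /* IE-compatible Digest towards the proxy, regular Digest to the server */
  curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_DIGEST_IE);
  curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_DIGEST);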
particular state for the control connection like it did before for implicit
FTPS (libcurl assumed such control connections to be encrypted, while some
FTPS servers such as FileZilla assume them to be in clear mode). Use the
CURLOPT_USE_SSL option to set your desired level.
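For instance (a sketch with a made-up URL; CURLUSESSL_CONTROL is one of the
CURLUSESSL_* levels):

  /* implicit FTPS, explicitly requiring SSL on the control connection */
  curl_easy_setopt(curl, CURLOPT_URL, "ftps://example.com/file");
  curl_easy_setopt(curl, CURLOPT_USE_SSL, (long)CURLUSESSL_CONTROL);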
researching it, it turned out he got a 550 response back from a SIZE
command, and then I stumbled over the text in RFC 3659 that says:
The presence of the 550 error response to a SIZE command MUST NOT be taken
by the client as an indication that the file cannot be transferred in the
current MODE and TYPE.
In other words: the change I made on September 30th 2008, which has been
included in the last two releases, was a regression and a bad idea. We MUST
NOT take a 550 response to SIZE as a hint that the file doesn't exist.
(http://curl.haxx.se/bug/view.cgi?id=2221237) that identified an infinite
loop during GSS authentication given some specific conditions. With his
patience and great feedback I managed to narrow down the problem and
eventually fix it although I can't test any of this myself!
(http://curl.haxx.se/bug/view.cgi?id=2351645) that identified a problem with
the multi interface that occurred if you removed an easy handle while in
progress and the handle was used in a HTTP pipeline.
function when built to support SCP and SFTP that helps the library to know
in which direction a particular libssh2 operation would return EAGAIN so
that libcurl knows what socket conditions to wait for before trying the
function call again. Previously (and still, when using libssh2 0.18 or
earlier), libcurl would busy-loop in this situation when the easy interface
was used!
when uploading files to a single FTP server using multiple easy handles
with the multi interface. Occasionally a handle would stall in mysterious
ways.
The problem turned out to be a side-effect of the ConnectionExists()
function's eagerness to re-use a connection for HTTP pipelining, so it would
select one even when it was already in use, due to an inadequate check of
its chances of being usable for pipelining.
(http://curl.haxx.se/bug/view.cgi?id=2255627) which pointed out that a
program using libcurl's multi interface to download a HTTPS page with a
libcurl built to use OpenSSL, would easily get confused and hand over SSL
details as data instead of the actual HTTP headers and body. This happened
because libcurl would consider the connection handshake done too early. This
problem was introduced on September 22nd 2008 with my fix of bug #2107377.
The correct fix is now instead done within the GnuTLS-handling code, as both
the OpenSSL and the NSS code already deal with this situation in similar
fashion. I added test case 560 in an attempt to verify this fix, but
unfortunately it didn't trigger it even before this fix!
problem with MSVC 6 makefile that caused a build failure. It was noted that
the curl_addrinfo.obj reference was missing. I took the opportunity to sort
the list in which this was missing.
problem with my CURLINFO_PRIMARY_IP fix from October 7th that caused a NULL
pointer read. I also took the opportunity to clean up this logic (storing of
the connection's IP address) somewhat as we had it stored in two different
places and ways previously and they are now unified.
can be created before resolving the IPv6 name. In the context of running
a test, it doesn't make sense to run an IPv6 test when a host is resolvable
but IPv6 isn't usable. This should fix failures of test 1085 on hosts with
library and DNS support for IPv6 but where actual use of IPv6 has been
administratively disabled.
make CURLOPT_PROXYUSERPWD sort of deprecated. The primary motive for adding
these new options is that they have no problems with the colon separator
that the CURLOPT_PROXYUSERPWD option does.
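The new option names are missing from this fragment, but the description
matches CURLOPT_PROXYUSERNAME and CURLOPT_PROXYPASSWORD; a sketch with
made-up credentials:

  /* a colon in the name is fine here, unlike with CURLOPT_PROXYUSERPWD */
  curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "DOMAIN:user");
  curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "secret");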
(http://curl.haxx.se/bug/view.cgi?id=2154627) which pointed out that libcurl
uses strcasecmp() in multiple places where it causes failures when the
Turkish locale is used. This is because 'i' and 'I' aren't the same letter
in Turkish, so strcasecmp() on those letters gives a different result there
than in English (or just about any other language). I thus introduced a
totally new internal function in libcurl (called Curl_ascii_equal) for doing
case insensitive comparisons of English (ASCII?) style strings, which will
make "file" and "FILE" match even when the Turkish locale is selected.
return code. This way, if the precheck command can't be run at all for
whatever reason, it's treated as a precheck failure which causes the
test to be skipped.
(http://curl.haxx.se/bug/view.cgi?id=2155496) pointing out an error case
without a proper human-readable error message. When a read callback returns
a too large value (like when trying to return a negative number) it would
trigger, and the generic error message then made the problem harder to track
down. I've added an error message for this now.
systems supporting getifaddrs(). Also fixed a problem where an IPv6
address could be chosen instead of an IPv4 one for --interface when it
involved a name lookup.
fixed a CURLINFO_REDIRECT_URL memory leak and an additional wrong-doing:
Any subsequent transfer with a redirect leaks memory, potentially crashing
the process eventually.
Any subsequent transfer WITHOUT a redirect causes the most recent redirect
that DID occur on some previous transfer to still be reported.
eventually identified a flaw in how the multi_socket interface in some cases
failed to call the timeout callback when easy handles are removed and added
within the same millisecond.
curl_easy_setopt: CURLOPT_USERNAME and CURLOPT_PASSWORD that sort of
deprecates the good old CURLOPT_USERPWD since they allow applications to set
the user name and password independently and perhaps more importantly allow
both to contain colon(s) which CURLOPT_USERPWD doesn't fully support.
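A sketch with made-up credentials:

  /* both fields may contain colons, unlike with CURLOPT_USERPWD */
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user:with:colons");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "pass:with:colons");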
a fresh connection to be made in such cases and the request retransmitted.
This should fix test case 160. Added test case 1079 in an attempt to
test a similar connection dropping scenario, but as a race condition, it's
hard to test reliably.
the app re-used the handle to do a connection to host B and then again
re-used the handle to host A, it would not update the info with host A's IP
address (due to the connection being re-used) but it would instead report
the info from host B.
gets a 550 response back for the cases where a download (or NOBODY) is
wanted. It still allows a 550 as response if the SIZE is used as part of an
upload process (like if resuming an upload is requested and the file isn't
there before the upload). I also modified the FTP test server and a few test
cases accordingly to match this modified behavior.
switching from one protocol to another in a single request (e.g.
redirecting from HTTP to FTP as in test 1055) by resetting
state.expect100header before every request.
date parser function. This makes our function less dependent on system-
provided functions and instead we do all the magic ourselves. We also no
longer depend on the TZ environment variable.
Markus Moeller reported: http://curl.haxx.se/mail/archive-2008-09/0016.html
- recv() errors other than those equal to EAGAIN now cause proper
CURLE_RECV_ERROR to get returned. This made test case 160 fail so I've now
disabled it until we can figure out another way to exercise that logic.
proxy" (http://curl.haxx.se/bug/view.cgi?id=2107377) that showed how a multi
interface using program didn't work when built with GnuTLS and a CONNECT
request was done over a proxy (basically test 502 over a proxy to a HTTPS
site). It turned out the ssl connect function would get called twice which
caused the second call to fail.
sites in cases where the cookie clearly has a very old expiry date. The
condition was simply that libcurl's date parser would fail to convert the
date and it would then count as a (time-based) match. Starting now, a missed
date due to an unsupported date format or date range will cause the cookie
to not match.